Jackson / No serializer found for class - scala

The Neo4j server provides a REST API that deals with the JSON format.
I use spring-data-neo4j to map a domain object (written in Scala) to a Neo4j node easily.
Here's an example of my User node:
@NodeEntity
class User(@Indexed @JsonProperty var id: UserId)
UserId being a value object:
final case class UserId(value: String) {
  override def toString = value
}
object UserId {
  def validation(userId: String): ValidationNel[IllegalUserFailure, UserId] =
    Option(userId).map(_.trim).filter(!_.isEmpty).map(userId => new UserId(userId)).toSuccess(NonEmptyList[IllegalUserFailure](EmptyId))
}
At runtime, I got this error:
Execution exception[[RuntimeException: org.codehaus.jackson.map.JsonMappingException: No serializer found for class com.myApp.domain.model.user.UserId and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationConfig.Feature.FAIL_ON_EMPTY_BEANS) ) (through reference chain: java.util.HashMap["value"])]]
Then, I came across this little article on the web, explaining a solution.
I ended up with this User class:
@NodeEntity
@JsonAutoDetect(Array(JsonMethod.NONE))
class User(@Indexed @JsonProperty var id: UserId)
I also tried to put the @JsonProperty on the UserId value object itself like this:
@JsonAutoDetect(Array(JsonMethod.NONE))
final case class UserId(@JsonProperty value: String) {
  override def toString = value
}
but I still get exactly the same error.
Has anyone using Scala already run into this issue?

I think your problem is that case classes don't generate the JavaBean boilerplate (or member fields annotated appropriately) Jackson expects. For example, I believe Scala generates this method in UserId:
public java.lang.String value();
Jackson doesn't know what to do with that. It isn't a recognizable field or a JavaBean-style method (i.e. getValue() or setValue()).
I haven't yet used it, but you might want to try jackson-module-scala as a more Scala-aware wrapper around Jackson. Another option is spray-json.
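If you would rather stay on Jackson 1.x without an extra module, one sketch of the JavaBean point above (untested against spring-data-neo4j) is to have scalac emit a bean-style getter with @BeanProperty, which Jackson's default introspection can discover:
import scala.reflect.BeanProperty // scala.beans.BeanProperty on Scala 2.10+

// @BeanProperty makes the compiler generate getValue(), a JavaBean-style
// accessor that Jackson 1.x recognises as the "value" property.
final case class UserId(@BeanProperty value: String) {
  override def toString = value
}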

The reason for the error is that the version of Jackson that you appear to be using (1.x) is not matching up the "value" property to the constructor argument. When applied to constructors, @JsonProperty usually requires a name parameter to match up parameters to properties; with your current setup, I believe the following would work:
case class UserId @JsonCreator() (@JsonProperty("value") value: String)
The Jackson Scala Module also provides more support for Scala-isms, and might possibly handle the UserId class without any Jackson-specific annotations. That said, your version of Jackson is quite old (the current latest version is 2.3.1), and upgrading might not be trivial for your configuration.
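For reference, wiring up the Scala module on Jackson 2.x is small; a minimal sketch (class names as of jackson-module-scala 2.x, independent of your spring-data-neo4j setup):
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule) // teaches Jackson about case classes, Options and Scala collections

val json = mapper.writeValueAsString(UserId("42")) // {"value":"42"}
val back = mapper.readValue(json, classOf[UserId])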

Related

Java Object serialization in scala

Pardon me as I am new to Scala.
I have created a case class which encapsulates some information. One of the objects I want to take in is an instance of a Java class. As I am using it in Spark, I need it to be serializable. How can I do that?
Java class:
public class Currency {
    public Currency(final BigDecimal amount, final CurrencyUnit unit) {
        // Doing something
    }
}
Scala case class:
case class ReconEntity(inputCurrency: Currency, outputCurrency: Currency)
Using an implicit, I want to provide serialization code for Currency so that Spark can work on ReconEntity.
Firstly, have you tried some RDD operations using your Currency and ReconEntity classes? Do you actually get an error? Spark is able to handle RDD operations with apparently non-serializable Scala classes as values, at least (you can try this in the spark-shell, though possibly this might require the Kryo serializer to be enabled).
Since you state that you don't own the Currency class, you can't add extends Serializable, which would be the simplest solution.
Another approach is to wrap the class with a serializable wrapper, as described in this article: Beating Serialization in Spark - example code copied here for convenience:
For simple classes, it is easiest to make a wrapper interface that extends Serializable. This means that even though UnserializableObject cannot be serialized we can pass in the following object without any issue:
public interface UnserializableWrapper extends Serializable {
    public UnserializableObject create(String prm1, String prm2);
}
The object can then be passed into an RDD or Map function using the following approach:
UnserializableWrapper usw = new UnserializableWrapper() {
    public UnserializableObject create(String prm1, String prm2) {
        return new UnserializableObject(prm1, prm2);
    }
};
If the class is merely a data structure, without significant methods, then it might be easier to unpack its fields into your RDD types (in your case, ReconEntity) and discard the class itself.
If the class has methods that you need, then your other (ugly) option is to cut-and-paste code into a new serializable class or into helper functions in your Spark code.
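Adapting that wrapper pattern to your case might look roughly like the sketch below (untested; it assumes the Currency and CurrencyUnit types from your snippet are on the classpath, and simply defers construction of the non-serializable Currency until inside the closure):
import java.math.BigDecimal

// The factory, not the Currency itself, is what gets shipped to the executors;
// Currency instances are only created once the code is already running there.
trait CurrencyFactory extends Serializable {
  def create(amount: BigDecimal, unit: CurrencyUnit): Currency
}

val currencyFactory: CurrencyFactory = new CurrencyFactory {
  def create(amount: BigDecimal, unit: CurrencyUnit) = new Currency(amount, unit)
}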

Bypass Scala Type Erasure (with Guava EventBus)

I am using Guava's EventBus in my Scala project.
I have a parameterized event like so:
class MyEvent[T]
And a simple event listener:
class MyEventListener {
  @Subscribe
  def onStringEvent(event: MyEvent[String]) {
    println("String event caught")
  }
  @Subscribe
  def onIntEvent(event: MyEvent[Int]) {
    println("Int event caught")
  }
}
I can create my com.google.common.eventbus.EventBus, register MyEventListener, and fire an event:
val eventBus = new EventBus
eventBus.register(new MyEventListener)
eventBus.post(new MyEvent[String])
But, as you may have guessed already, both onStringEvent and onIntEvent get called as a result. The issue is that type erasure on the JVM drops the type parameter at runtime, so both subscriptions appear to Guava as event: MyEvent.
Ok, my question:
Due to erasure, using the same event object for different types of Guava events in this manner wouldn't be possible in Java, and it isn't possible in Scala either. However, Scala has a number of nice ways to circumvent Java's erasure problems. Does anybody see another way to achieve this, perhaps using some Scala wizardry?
The problem is in Guava: it cannot see the type parameter, and so it will not distinguish between the two methods. The only possible solution is to create a new class for each type.
That can be really easy:
class MyEvent[T] protected () { /* Your methods here */ }
class MyEventInt extends MyEvent[Int] {}
class MyEventString extends MyEvent[String] {}
and then whenever you need to do anything in your code, just use MyEvent[Int]. But Guava will require at least this much boilerplate.
Note that I've made the MyEvent[T] constructor protected so you have to instantiate one of the de-generified classes. I'm not sure whether that will work for your use-case; I'll assume so. You can get around that also (with type classes), but it adds more boilerplate.
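To make the original listener work with this approach, the subscriptions then target the concrete subclasses; a sketch based on the code above:
class MyEventListener {
  @Subscribe
  def onStringEvent(event: MyEventString) {
    println("String event caught")
  }
  @Subscribe
  def onIntEvent(event: MyEventInt) {
    println("Int event caught")
  }
}

val eventBus = new EventBus
eventBus.register(new MyEventListener)
eventBus.post(new MyEventString) // only onStringEvent fires, since dispatch is by runtime class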

Play 2.0 models best practices

I am looking for best practices regarding models and ways to persist objects in a database with Play 2.0. I have studied the Play and Typesafe samples for Play 2.0 using Scala.
What I understand is:
The model is defined in a case class
All the insert/update/delete/select are defined in the companion object of this case class
So if I need to update my Car object to define a new owner, I will have to do:
val updatedCar = myCar.copy(owner=newOwner)
Car.update(updatedCar)
// or
Car.updateOwner(myCar.id.get, newOwner)
I am wondering why the update or delete statements are not in the case class itself:
case class Car(id: Pk[Long] = NotAssigned, owner: String) {
  def updateOwner(newOwner: String): Car = {
    DB.withConnection { implicit connection =>
      SQL(
        """
        update car
        set owner = {newOwner}
        where id = {id}
        """
      ).on(
        'id -> id,
        'newOwner -> newOwner
      ).executeUpdate()
    }
    copy(owner = newOwner)
  }
}
Doing so would permit me to write:
val updatedCar = myCar.updateOwner(newOwner)
Which is what I used to do with Play 1.X using Java and JPA.
Maybe the reason is obvious and just due to my limited knowledge of Scala.
I think part of the reason is the favoring of immutability in functional languages like Scala.
In your example, you modify 'this.owner'. What would the equivalent operation look like for a delete, and what happens to 'this'?
With a companion object, it seems a bit more clear that the passed object (or ID) is not modified, and the returned object or ID is the relevant result of the operation.
Then also, I think another part of the issue is that your example requires an instance first. When you delete an object, what if you just want to delete by an id you got off a form, and don't want to first build a whole instance of the object you intend to delete?
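For example, a delete needs nothing more than the id when it lives in the companion object; a sketch in the same Anorm style as the question (untested):
object Car {
  def delete(id: Long): Int = DB.withConnection { implicit connection =>
    SQL("delete from car where id = {id}").on('id -> id).executeUpdate()
  }
}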
I've been playing with play2.0 with mongo, and my companion objects look like:
object MyObject extends SalatDAO[MyObject, ObjectId](collection = getCollection("objectcollection"))
These companion objects inherit CRUD-like operations from SalatDAO (MyObject.save(), MyObject.find(), etc.). I'm not entirely clear on how it is implemented internally, but it works nicely.

Serialize Function1 to database

I know it's not directly possible to serialize a function/anonymous class to the database but what are the alternatives? Do you know any useful approach to this?
To present my situation: I want to award a user "badges" based on his scores. So I have different types of badges that can be easily defined by extending this class:
class BadgeType(id: Long, name: String, detector: Function1[List[UserScore], Boolean])
The detector member is a function that walks the list of scores and returns true if the user qualifies for a badge of this type.
The problem is that each time I want to add/edit/modify a badge type I need to edit the source code, recompile the whole thing and re-deploy the server. It would be much more useful if I could persist all BadgeType instances to a database. But how to do that?
The only thing that comes to mind is to have the body of the function as a script (e.g. Groovy) that is evaluated at runtime.
Another approach (that does not involve a database) might be to have each badge type in a jar that I can somehow hot-deploy at runtime, which I guess is how a plugin system might work.
What do you think?
My very brief advice is that if you want this to be truly data-driven, you need to implement a rules DSL and an interpreter. The rules are what get saved to the database, and the interpreter takes a rule instance and evaluates it against some context.
But that's overkill most of the time. You're better off having a little snippet of actual Scala code that implements the rule for each badge, giving each a unique ID, and then storing the IDs in the database.
e.g.:
trait BadgeEval extends Function1[User, Boolean] {
  def badgeId: Int
}

object Badge1234 extends BadgeEval {
  def badgeId = 1234
  def apply(user: User) = {
    user.isSufficientlyAwesome // && ...
  }
}
You can either have a big whitelist of BadgeEval instances:
val weDontNeedNoStinkingBadges = Map(
  1234 -> Badge1234,
  5678 -> Badge5678
  // ...
)
def evaluator(id: Int): Option[BadgeEval] = weDontNeedNoStinkingBadges.get(id)
def doesUserGetBadge(user: User, id: Int) = evaluator(id).map(_(user)).getOrElse(false)
... or if you want to keep them decoupled, use reflection:
def badgeEvalClass(id: Int) = Class.forName("com.example.badge.Badge" + id + "$").asInstanceOf[Class[BadgeEval]]
... and if you're interested in runtime pluggability, try the service provider pattern.
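A hedged sketch of that last suggestion using java.util.ServiceLoader (package name and registration file below are illustrative; note that ServiceLoader instantiates implementations through a public no-arg constructor, so the evaluators would be classes rather than objects here):
import java.util.ServiceLoader
import scala.collection.JavaConverters._

// Implementations are listed in META-INF/services/com.example.badge.BadgeEval,
// so new badges can be dropped onto the classpath as separate jars.
val evaluators: Map[Int, BadgeEval] =
  ServiceLoader.load(classOf[BadgeEval]).asScala.map(e => e.badgeId -> e).toMap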
You can try using Scala Continuations - they can give you the ability to serialize a computation and run it at a later time, or even on another machine.
Some links:
Continuations
What are Scala continuations and why use them?
Swarm - Concurrency with Scala Continuations
Serialization relates to data rather than behaviour. You cannot serialize functionality, because behaviour lives in class files and object serialization only serializes the fields of an object.
So like Alex says, you need a rule engine.
Try this one if you want something fairly simple, which is string based, so you can serialize the rules as strings in a database or file:
http://blog.maxant.co.uk/pebble/2011/11/12/1321129560000.html
Using a DSL has the same problems unless you interpret or compile the code at runtime.

How do I implement configurations and settings?

I'm writing a system that is deployed in several places and each site needs its own configurations and settings. A "configuration" is a named value that is necessary to a particular site (e.g., the database URL, S3 bucket name); every configuration is necessary, there is not usually a default, and it's typically string-valued. A setting is a named value but it just tweaks the behavior of the system; it's often numeric or Boolean, and there's usually some default.
So far, I've been using property files or things like them, but it's a terrible solution. Several times, a developer has added a requirement for a configuration but not added the value to the file for the live configuration, so the new release passed all the tests, then failed when released to live.
Better, of course, would be for every configuration to be compiled (so a missing configuration, or one of the wrong type, won't get past the compiler) and to inject the site-specific class into the build for each site. As a bonus, a Scala file can easily model more complex values, especially lists, but also maps and tuples.
The downside is, the files are sometimes maintained by people who aren't developers, so it has to be pretty self-explanatory, which was the advantage of property files. (Someone explain XML configurations to me: all the complexity of a compilable file but the run-time risk of a property file.)
What I'm looking for is an easy pattern for defining a group of required names and allowable values. Any suggestions?
How about using mixin composition? Since traits are applied from right to left, we can:
Define traits:
default property:
trait PropertyA {
  val userName = "default"
  def useUserName(): Unit = {
    println(userName)
  }
}
some other property:
trait SomePropertyA extends PropertyA {
  override val userName = "non-default"
  override def useUserName(): Unit = {
    println(userName)
  }
}
Define a trait that holds the default property:
trait HasPropertyA {
  val prop: PropertyA = new PropertyA {}
}
Define a trait that holds the non-default property:
trait HasSomeOtherPropertyA extends HasPropertyA {
  override val prop: PropertyA = new SomePropertyA {}
}
In your class, use the default one:
trait MyClass extends HasPropertyA {
  doSomethingWith(prop.userName)
}
or, in the other situation, mix in the other property:
if (env.isSome) {
  val otherProps = new MyClass with HasSomeOtherPropertyA {}
  doSomethingWith(otherProps.prop.userName) // userName == non-default!
}
Read about this in more detail in the paper Scalable Component Abstractions.
Although Lift is essentially a web framework, it has some utilities as well. One of them is dependency injection, see: http://simply.liftweb.net/index-8.2.html#toc-Section-8.2. So you can, for example, create a base trait with default values and then subclass it for the Runtime, Development, Test, ... environment values. And I think it's easy for someone without knowledge of Scala to put something like override def defaultValue = "new value" in a file.
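To illustrate the base-trait idea both answers point at, a purely illustrative sketch (names and values are not from either answer): required configurations are abstract, so forgetting one fails at compile time, while settings carry defaults:
trait AppConfig {
  // configurations: required, no defaults
  def databaseUrl: String
  def s3BucketName: String
  // settings: tweak behaviour and have defaults
  def maxRetries: Int = 3
}

object LiveConfig extends AppConfig {
  def databaseUrl  = "jdbc:postgresql://live-db/app"
  def s3BucketName = "live-bucket"
}

object TestConfig extends AppConfig {
  def databaseUrl  = "jdbc:h2:mem:test"
  def s3BucketName = "test-bucket"
  override def maxRetries = 1
}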