I am trying to use Spring to write a document to MongoDB, and I am getting an org.springframework.data.mapping.MappingException: Ambiguous field mapping detected!
The problem is that this ambiguity comes from a compiled class which inherits from another compiled class, so I cannot use the @Field annotation to change the field name manually.
Is there any way to tell Spring how to resolve ambiguous field mappings without modifying the classes' code?
The class I am trying to persist looks like this:
data class BehaviouralEvent(
    val sources: Set<BehaviouralEvent>,
    override val activity: Activity,
    override val start: Instant = Instant.now(),
    override val end: Instant = Instant.now(),
    override val lifecycle: Lifecycle = Lifecycle.UNKNOWN
) : Event(activity, start, end, lifecycle) {

    constructor(
        sources: Set<BehaviouralEvent>,
        activityID: String,
        start: Instant = Instant.now(),
        end: Instant = Instant.now(),
        lifecycle: Lifecycle = Lifecycle.UNKNOWN
    ) : this(sources, Activity.from(activityID), start, end, lifecycle)

    constructor(
        sources: Set<BehaviouralEvent>,
        event: Event
    ) : this(sources, event.activity, event.start, event.end, event.lifecycle)
}
When I try to persist a document with this structure (with a MongoRepository<BehaviouralEvent, String>) I get an ambiguous field mapping for all the overridden attributes (activity, start, end and lifecycle).
Appreciate any ideas or workarounds.
tl;dr At the time of writing there is no way around this issue.
The mapping layer currently has no means to tell which of the properties should be persisted if one "shadows" the other. Therefore we have this check in the entity metadata.
Now, if you tried to relax the unique field check in BasicMongoPersistentEntity a bit for Kotlin types by adding something like
if (isKotlinType(property.getOwner()) && !property.hasGetter()) {
  return;
}
the repository would no longer complain at creation time. However, the mapping layer still needs to determine which of the presented properties to persist; depending on inspection order, they still override one another, and it is likely that the wrong state ends up persisted, especially when overriding a val with a var.
I've opened DATAMONGO-2250 to investigate further and see if there's something we can do about it.
I have a complex project which reads configurations from a DB through the object ConfigAccessor which implements two basic APIs: getConfig(name: String) and storeConfig(c: Config).
Due to how the project is currently designed, almost every component needs to use the ConfigAccessor to talk to the DB. Thus, since this component is an object, it is easy to just import it and call its static methods.
Now I am trying to build some unit tests for the project in which the configurations are stored in an in-memory HashMap. So, first of all, I decoupled the config accessor logic from its storage (using the cake pattern). This way I can define my own ConfigDbComponent while testing:
class ConfigAccessor {
  this: ConfigDbComponent =>
  ...
}
The "problem" is that now ConfigAccessor is a class, which means I have to instantiate it at the beginning of my application and pass it everywhere to whoever needs it. The first way I can think of for passing this instance around would be through other components constructors. This would become quite verbose (adding a parameter to every constructor in the project).
What do you suggest me to do? Is there a way to use some design pattern to overcome this verbosity or some external mocking library would be more suitable for this?
Yes, the "right" way is passing it in constructors. You can reduce verbosity by providing a default argument:
class Foo(config: ConfigAccessor = ConfigAccessor) { ... }
There are some "dependency injection" frameworks, like guice or spring, built around this, but I won't go there, because I am not a fan.
You could also continue utilizing the cake pattern:
trait Configuration {
  def config: ConfigAccessor
}

trait Foo { self: Configuration => ... }

class FooProd extends Foo with ProdConfig
class FooTest extends Foo with TestConfig
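For completeness, ProdConfig and TestConfig might look something like this (a sketch; the concrete DB component names are hypothetical stand-ins for whatever completes your cake):
// Hypothetical DB components completing the cake:
trait ConfigDbComponent
trait ProductionDbComponent extends ConfigDbComponent
trait InMemoryDbComponent extends ConfigDbComponent

trait ProdConfig extends Configuration {
  val config = new ConfigAccessor with ProductionDbComponent
}
trait TestConfig extends Configuration {
  val config = new ConfigAccessor with InMemoryDbComponent
}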
Alternatively, use the "static setter". It minimizes changes to existing code, but requires mutable state, which is really frowned upon in Scala:
object Config extends ConfigAccessor {
  @volatile private var accessor: ConfigAccessor = _
  def configurate(cfg: ConfigAccessor): ConfigAccessor = synchronized {
    val old = accessor
    accessor = cfg
    old
  }
  def getConfig(c: String) = Option(accessor).fold(
    throw new IllegalStateException("Not configured!")
  )(_.getConfig(c))
}
You can retain a global ConfigAccessor and allow selectable accessors like this:
object ConfigAccessor {
  private lazy val accessor = GetConfigAccessor()
  def getConfig(name: String) = accessor.getConfig(name)
  ...
}
For production builds you can put logic in GetConfigAccessor to select the appropriate accessor based on some global config such as Typesafe Config.
For unit testing you can have a different version of GetConfigAccessor for different test builds which returns the appropriate test implementation.
Making this value lazy allows you to control the order of initialisation and if necessary do some non-functional mutable stuff in the initialisation code before creating the components.
Update following comments
The production code would have an implementation of GetConfigAccessor something like this:
object GetConfigAccessor {
  private val useAws = System.getProperties.getProperty("accessor.aws") == "true"

  def apply(): ConfigAccessor =
    if (useAws) new AwsConfigAccessor
    else new PostgresConfigAccessor
}
Both AwsConfigAccessor and PostgresConfigAccessor would have their own unit tests to prove that they conform to the correct behaviour. The appropriate accessor can be selected at runtime by setting the appropriate system property.
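For instance (a sketch): since useAws is captured when the GetConfigAccessor object initializes, the property has to be set before the object is first touched.
// In a launcher or test, before GetConfigAccessor is first used:
System.setProperty("accessor.aws", "true")
val accessor = GetConfigAccessor() // now selects AwsConfigAccessor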
For unit testing there would be a simpler implementation of GetConfigAccessor, something like this:
object GetConfigAccessor {
  def apply() = new MockConfigAccessor
}
Unit testing is done within a unit testing framework which contains a number of libraries and mock objects that are not part of the production code. These are built separately and are not compiled into the final product. So this version of GetConfigAccessor would be part of that unit testing code and would not be part of the final product.
Having said all that, I would only use this model for reading static configuration data because that keeps the code functional. The ConfigAccessor is just a convenient way to access global constants without having them passed down in the constructor.
If you are also writing data then this is more like a real DB than a configuration. In that case I would create custom accessors for each component that give access to different parts of the DB. That way it is clear which parts of the data are updated by each component. These accessors would be passed down to the component and can then be unit tested with the appropriate mock implementation as normal.
You may need to partition your data into static config and dynamic config and handle them separately.
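A sketch of that per-component idea (all names here are invented for illustration):
// A narrow, component-specific view of the DB makes it explicit which
// data this component reads and writes.
trait UserPrefsAccessor {
  def readPrefs(userId: String): Map[String, String]
  def writePrefs(userId: String, prefs: Map[String, String]): Unit
}

class UserComponent(prefs: UserPrefsAccessor) {
  def rename(userId: String, name: String): Unit =
    prefs.writePrefs(userId, prefs.readPrefs(userId) + ("name" -> name))
}

// In unit tests, a mock backed by an in-memory map:
class InMemoryUserPrefs extends UserPrefsAccessor {
  private var store = Map.empty[String, Map[String, String]].withDefaultValue(Map.empty)
  def readPrefs(userId: String) = store(userId)
  def writePrefs(userId: String, prefs: Map[String, String]) = store += userId -> prefs
}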
I understand from both personal experience and this discussion that when a data class inherits from another class, the inherited class's fields are not included in the data class's copy function.
I'm interested in what the options are for getting around this issue.
Specifically, I have a JPA @MappedSuperclass for my JPA entities, which are data classes. In the superclass I set up the entity ID, which (at least so far) I always want to do the same way. There are some other things I may want to do in it as well, like set up a created date, last updated date, etc.
Options I've considered so far:
Copy-paste the ID, created date, etc. into every entity. Pros: it's easy and the copy method works. Cons: it fails DRY and you can't handle all entities using a shared superclass. (But you could create an interface for that.)
Override the superclass's values and pass them to the superclass.
You still need to copy-paste the override values into every entity, but at least you don't have to copy the annotations.
@Entity
data class Comment(
    @Lob
    val comment: String,
    override val id: Long = -1
) : BaseEntity(id)

@MappedSuperclass
abstract class BaseEntity(
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    open val id: Long = -1
)
??? I can't even think of a third option that works. Is there another way to do it? Make ID a var and create a custom copy method every time? That sounds ugly.
I'm fairly certain that, because of type erasure, you aren't going to be able to accomplish this with the types of classes you have defined. Because your data classes extend an abstract class, you're going to run into many roadblocks.
The easiest way to have it both ways still requires a bit of work, and there are inherent drawbacks:
fun <T : BaseEntity> T.withBase(base: BaseEntity): T {
    id = base.id // note: this requires id to be declared as a var in BaseEntity
    return this
}
This is an easy extension method that would live next to the BaseEntity class definition, and you would simply chain that call after .copy(). So you would use it as follows:
val base = Comment("Created an object")
val copy = base.copy().withBase(base)
Caveats:
This will screw up your generated values, because the copy() call will instantiate a new BaseEntity.
You have to remember to chain those calls.
If you want the id to increment (and any @GeneratedValue field) when you copy, then the first caveat disappears. The chaining is still required, but it massively reduces copy-paste and other possible errors.
The Neo4j server provides a REST API dealing with the JSON format.
I use spring-data-neo4j to easily map a domain object (in Scala) to a Neo4j node.
Here's an example of my User node:
@NodeEntity
class User(@Indexed @JsonProperty var id: UserId)
UserId being a value object:
final case class UserId(value: String) {
  override def toString = value
}

object UserId {
  def validation(userId: String): ValidationNel[IllegalUserFailure, UserId] =
    Option(userId).map(_.trim).filter(!_.isEmpty)
      .map(userId => new UserId(userId))
      .toSuccess(NonEmptyList[IllegalUserFailure](EmptyId))
}
At runtime, I got this error:
Execution exception[[RuntimeException: org.codehaus.jackson.map.JsonMappingException: No serializer found for class com.myApp.domain.model.user.UserId and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationConfig.Feature.FAIL_ON_EMPTY_BEANS) ) (through reference chain: java.util.HashMap["value"])]]
Then, I came across this little article on the web, explaining a solution.
I ended up with this User class:
@NodeEntity
@JsonAutoDetect(Array(JsonMethod.NONE))
class User(@Indexed @JsonProperty var id: UserId)
I also tried to put the @JsonProperty on the UserId value object itself like this:
@JsonAutoDetect(Array(JsonMethod.NONE))
final case class UserId(@JsonProperty value: String) {
  override def toString = value
}
but I still get exactly the same error.
Has anyone using Scala run into this issue already?
I think your problem is that case classes don't generate the JavaBean boilerplate (or member fields annotated appropriately) Jackson expects. For example, I believe Scala generates this method in UserId:
public java.lang.String value();
Jackson doesn't know what to do with that. It isn't a recognizable field or a JavaBean-style method (i.e. getValue() or setValue()).
I haven't yet used it, but you might want to try jackson-module-scala as a more Scala-aware wrapper around Jackson. Another option is spray-json.
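For reference, with Jackson 2.x the Scala module is registered like this (a minimal sketch; the stack trace above shows Jackson 1.x, so this assumes an upgrade):
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

// DefaultScalaModule teaches Jackson about case classes, Options and
// Scala collections, so UserId needs no Jackson-specific annotations.
val mapper = new ObjectMapper()
mapper.registerModule(DefaultScalaModule)

val json = mapper.writeValueAsString(UserId("42")) // {"value":"42"}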
The reason for the error is that the version of Jackson that you appear to be using (1.x) is not matching up the "value" property to the constructor argument. When applied to constructors, @JsonProperty usually requires a name parameter to match up parameters to properties; with your current setup, I believe the following would work:
case class UserId @JsonCreator() (@JsonProperty("value") value: String)
The Jackson Scala Module also provides more support for Scala-isms, and might possibly handle the UserId class without any Jackson-specific annotations. That said, your version of Jackson is quite old (the current latest version is 2.3.1), and upgrading might not be trivial for your configuration.
I know it's not directly possible to serialize a function/anonymous class to the database, but what are the alternatives? Do you know of any useful approach to this?
To present my situation: I want to award a user "badges" based on his scores. So I have different types of badges that can be easily defined by extending this class:
class BadgeType(id: Long, name: String, detector: Function1[List[UserScore], Boolean])
The detector member is a function that walks the list of scores and returns true if the user qualifies for a badge of this type.
The problem is that each time I want to add/edit/modify a badge type I need to edit the source code, recompile the whole thing and re-deploy the server. It would be much more useful if I could persist all BadgeType instances to a database. But how to do that?
The only thing that comes to mind is to have the body of the function as a script (ex: Groovy) that is evaluated at runtime.
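Something like this minimal sketch, using the standard javax.script API, is what I have in mind (assuming a JSR-223 engine such as Groovy is on the classpath; the script text is just an example):
import javax.script.ScriptEngineManager

object ScriptedDetector {
  // The script text would be loaded from the badge type's row in the DB.
  def qualifies(script: String, scores: java.util.List[Int]): Boolean = {
    val engine = new ScriptEngineManager().getEngineByName("groovy") // null if no engine is found
    engine.put("scores", scores)              // expose the scores to the script
    engine.eval(script).asInstanceOf[Boolean] // the script must evaluate to a boolean
  }
}

// e.g. ScriptedDetector.qualifies("scores.sum() > 100", java.util.Arrays.asList(60, 50))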
Another approach (that does not involve a database) might be to package each badge type in a jar that I can somehow hot-deploy at runtime, which I guess is how a plugin system might work.
What do you think?
My very brief advice is that if you want this to be truly data-driven, you need to implement a rules DSL and an interpreter. The rules are what get saved to the database, and the interpreter takes a rule instance and evaluates it against some context.
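To make that idea concrete, here is a minimal sketch (the rule shapes are invented for illustration; real rules would be mapped to and from database rows):
// Rules are plain data, so they can be persisted; the interpreter
// evaluates a rule against a user's scores.
sealed trait Rule
case class TotalAbove(threshold: Int)   extends Rule
case class CountAtLeast(n: Int)         extends Rule
case class And(left: Rule, right: Rule) extends Rule

case class UserScore(points: Int)

object RuleInterpreter {
  def eval(rule: Rule, scores: List[UserScore]): Boolean = rule match {
    case TotalAbove(t)   => scores.map(_.points).sum > t
    case CountAtLeast(n) => scores.size >= n
    case And(l, r)       => eval(l, scores) && eval(r, scores)
  }
}

// e.g. RuleInterpreter.eval(And(TotalAbove(100), CountAtLeast(3)), scores)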
But that's overkill most of the time. You're better off having a little snippet of actual Scala code that implements the rule for each badge, giving them unique IDs, then storing the IDs in the database.
e.g.:
trait BadgeEval extends Function1[User, Boolean] {
  def badgeId: Int
}

object Badge1234 extends BadgeEval {
  def badgeId = 1234
  def apply(user: User) = {
    user.isSufficientlyAwesome // && ...
  }
}
You can either have a big whitelist of BadgeEval instances:
val weDontNeedNoStinkingBadges = Map(
  1234 -> Badge1234,
  5678 -> Badge5678,
  // ...
)
def evaluator(id: Int): Option[BadgeEval] = weDontNeedNoStinkingBadges.get(id)
def doesUserGetBadge(user: User, id: Int) = evaluator(id).map(_(user)).getOrElse(false)
... or if you want to keep them decoupled, use reflection:
def badgeEvalClass(id: Int) = Class.forName("com.example.badge.Badge" + id + "$").asInstanceOf[Class[BadgeEval]]
... and if you're interested in runtime pluggability, try the service provider pattern.
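A sketch of that service-provider approach (assuming Scala 2.13 for CollectionConverters; note that ServiceLoader providers must be concrete classes with public no-arg constructors, so the badge implementations would be classes rather than objects):
import java.util.ServiceLoader
import scala.jdk.CollectionConverters._

// Each badge jar declares its implementations in
// META-INF/services/com.example.BadgeEval; dropping a new jar on the
// classpath makes its badges available without recompiling the server.
val evaluators: Map[Int, BadgeEval] =
  ServiceLoader.load(classOf[BadgeEval]).asScala
    .map(e => e.badgeId -> e)
    .toMap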
You can try to use Scala continuations - they can give you the ability to serialize the computation and run it at a later time or even on another machine.
Some links:
Continuations
What are Scala continuations and why use them?
Swarm - Concurrency with Scala Continuations
Serialization relates to data rather than methods. You cannot serialize functionality: behaviour lives in a class file, and object serialization only serializes the fields of an object.
So like Alex says, you need a rule engine.
Try this one if you want something fairly simple, which is string based, so you can serialize the rules as strings in a database or file:
http://blog.maxant.co.uk/pebble/2011/11/12/1321129560000.html
Using a DSL has the same problems unless you interpret or compile the code at runtime.
I'm writing a system that is deployed in several places and each site needs its own configurations and settings. A "configuration" is a named value that is necessary to a particular site (e.g., the database URL, S3 bucket name); every configuration is necessary, there is not usually a default, and it's typically string-valued. A setting is a named value but it just tweaks the behavior of the system; it's often numeric or Boolean, and there's usually some default.
So far, I've been using property files or things like them, but it's a terrible solution. Several times, a developer has added a requirement for a configuration but not added the value to the file for the live configuration, so the new release passed all the tests, then failed when released to live.
Better, of course, for every configuration to be compiled, so that a missing configuration, or one of the wrong type, won't get past the compiler, and to inject the site-specific class into the build for each site. As a bonus, a Scala file can easily model more complex values: especially lists, but also maps and tuples.
The downside is, the files are sometimes maintained by people who aren't developers, so it has to be pretty self-explanatory, which was the advantage of property files. (Someone explain XML configurations to me: all the complexity of a compilable file but the run-time risk of a property file.)
What I'm looking for is an easy pattern for defining a group of required names and allowable values. Any suggestions?
How about using mixin composition? Since traits are applied from right to left, we can:
Define traits:
default property:
trait PropertyA {
  val userName = "default"
  def useUserName(): Unit = {
    println(userName)
  }
}
some other property:
trait SomePropertyA extends PropertyA {
  override val userName = "non-default"
  override def useUserName(): Unit = {
    println(userName)
  }
}
Define a trait providing the default property:
trait HasPropertyA {
  val prop: PropertyA = new PropertyA {}
}
Define a trait providing the non-default property:
trait HasSomeOtherPropertyA extends HasPropertyA {
  override val prop: PropertyA = new SomePropertyA {}
}
In your class, use the default one:
trait MyClass extends HasPropertyA {
  doSomethingWith(prop.userName)
}
or in the other situation mix it with some other property:
if (env.isSome) {
  val otherProps = new MyClass with HasSomeOtherPropertyA {}
  doSomethingWith(otherProps.prop.userName) // userName == "non-default"!
}
You can read about this in more detail in the paper Scalable Component Abstractions.
Although Lift is in essence a web framework, it has some utilities as well. One of them is dependency injection; see http://simply.liftweb.net/index-8.2.html#toc-Section-8.2. So you can, for example, create a base trait with default values and then subclass the Runtime, Development, Test... environment values. And I think it's easy for someone without knowledge of Scala to put an override def defaultValue = "new value" in a file.
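For example, a minimal sketch of that base-trait idea (all names invented for illustration):
// Configurations have no default: a site file that misses one, or gives it
// the wrong type, fails to compile. Settings carry defaults.
trait AppSettings {
  def dbUrl: String    // configuration: required
  def s3Bucket: String // configuration: required
  def retryCount: Int = 3 // setting: tweakable, with a default
  def adminEmails: List[String] = Nil
}

// One object per site; this is the file non-developers edit.
object LondonSite extends AppSettings {
  def dbUrl = "jdbc:postgresql://db.london.example.com/app"
  def s3Bucket = "app-london"
  override def retryCount = 5
}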