Is it possible to apply a single annotation to multiple use-site targets in Kotlin?

According to the documentation: https://kotlinlang.org/docs/reference/annotations.html
You can apply multiple annotations to a single use-site target, but is there a way to apply the same annotation to multiple use-site targets?
My use case is decorating classes with annotations for SimpleXML. To use an immutable data class, you have to annotate both the field and the constructor parameter:
data class Data(
    @field:Element(name = "ID")
    @param:Element(name = "ID")
    val id: Int,

    @param:Element(name = "TEXT")
    @field:Element(name = "TEXT")
    val text: String)
For data classes with many fields, you can easily end up with 3x as many annotations as actual code, and it would be nice to eliminate the duplication. This is especially annoying when you have to use a complicated annotation like ElementUnion which can be multiple lines on its own.

Unfortunately, as of Kotlin 1.3, there's no syntax for this use case.

Sangria GraphQL: How to mix deferred fields, deriveObjectType, and case classes

I'm curious if it's possible to define a case class's field as deferred while still using the deriveObjectType macro to define everything else.
Here's an example. Dashboards contain a sequence of tabs:
case class Tab(id: Long, dashboardId: Long, name: String, order: Long)
case class Dashboard(id: Long, name: String, tabs: Seq[Tab])
I'm deferring resolution of the Dashboard.tabs field using a Fetcher, AND I'd like to continue using the deriveObjectType macro (if possible). So here's how I've defined my ObjectTypes:
val TabType = deriveObjectType[Unit, Tab]()
val DashboardType = deriveObjectType[Unit, Dashboard](
  AddFields(
    Field(
      name = "tabs",
      fieldType = ListType(TabType),
      resolve = ctx => TabsFetcher.fetcher.defer(ctx.value.id)
    )
  )
)
But, when I run the code, I get the following error:
sangria.schema.NonUniqueFieldsError: All fields within 'Dashboard' type should have unique names! Non-unique fields: 'tabs'.
If I remove the tabs field from the Dashboard case class the error goes away, but I lose some of the benefit of using a case class (especially in unit tests). And if I avoid the use of the deriveObjectType macro (and define the Dashboard's ObjectType manually), then I lose the benefits of the macro (less boilerplate).
So, I'm curious if there's a better way, or another way, around this issue, short of defining the DashboardType without the macro or removing the tabs field from the Dashboard case class.
(I was hoping that there might be some sort of @GraphQLDeferred annotation that I could apply to the Dashboard.tabs field, or something similar?)
You almost got it right. You need to use ReplaceField instead of AddFields. Alternatively, you can ExcludeFields("tabs") and continue using AddFields.
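A sketch of the ReplaceField variant, assuming the Tab/Dashboard case classes and the TabsFetcher from the question (untested against a specific Sangria version, so treat names and signatures as approximate):

```scala
val TabType = deriveObjectType[Unit, Tab]()

val DashboardType = deriveObjectType[Unit, Dashboard](
  // Replace the derived 'tabs' field rather than adding a second one
  ReplaceField("tabs",
    Field("tabs", ListType(TabType),
      resolve = ctx => TabsFetcher.fetcher.defer(ctx.value.id))
  )
)
```

Because the macro already derives a `tabs` field from the case class, replacing it avoids the duplicate that triggers NonUniqueFieldsError.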

Assign single element to an IntArray value attribute in Kotlin annotations

I have a custom annotation declared as
@Target(AnnotationTarget.FUNCTION)
annotation class Anno(val value: IntArray, val attr2: Int = 0)
For a single element declaration, I'm able to use the above annotation in a Java class as
@Anno(1)
but while writing the same in a Kotlin class, I have to put enclosing brackets
@Anno([1])
Aren't the brackets unnecessary in this case or am I declaring the annotation wrong? I am using Kotlin version 1.2.0-rc-39
Yes, square brackets (Kotlin 1.2+) or arrayOf (before 1.2) are required.
But as long as this is your own annotation, written in Kotlin, you can do fancy things with it, like taking lambdas or varargs, so you can tune the resulting syntax to your needs. For example, this is valid syntax even before Kotlin 1.2:
@Target(AnnotationTarget.FUNCTION)
annotation class Anno(
    val attribute: String,
    vararg val value: Int
)

@Anno("test", 1, 2, 3)
fun test() = 42
You'll need to put the vararg parameter last.

Pre-process parameters of a case class constructor without repeating the argument list

I have this case class with a lot of parameters:
case class Document(id: String, title: String, /* ...12 more params... */ keywords: Seq[String])
For certain parameters, I need to do some string cleanup (trim, etc) before creating the object.
I know I could add a companion object with an apply function, but the LAST thing I want is to write the list of parameters TWICE in my code (case class constructor and companion object's apply).
Does Scala provide anything to help me on this?
My general recommendations would be:
Your goal (data preprocessing) is the perfect use case for a companion object, so it is maybe the most idiomatic solution despite the boilerplate.
If the number of case class parameters is high, the builder pattern definitely helps, since you do not have to remember the order of the parameters and your IDE can help you with calling the builder member functions. Using named arguments for the case class constructor allows you to use any argument order as well, but to my knowledge there is no IDE autocompletion for named arguments, which makes a builder class slightly more convenient. However, using a builder class raises the question of how to enforce the specification of certain arguments: the simple solution may cause runtime errors, while the type-safe solution is a bit more verbose. In this regard a case class with default arguments is more elegant.
There is also this solution: Introduce an additional flag preprocessed with a default argument of false. Whenever you want to use an instance val d: Document, you call d.preprocess() implemented via the case class copy method (to avoid ever typing all your arguments again):
case class Document(id: String, title: String, keywords: Seq[String], preprocessed: Boolean = false) {
  def preprocess() = if (preprocessed) this else {
    this.copy(title = title.trim, preprocessed = true) // or whatever you want to do
  }
}
But: You cannot prevent a client from initializing preprocessed to true.
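To see the pattern in action, here is a minimal, self-contained run (the sample field values are made up, and keywords are trimmed here too for illustration):

```scala
// Same idea as above, restated so this snippet runs standalone
case class Document(id: String, title: String, keywords: Seq[String],
                    preprocessed: Boolean = false) {
  def preprocess(): Document =
    if (preprocessed) this
    else copy(title = title.trim, keywords = keywords.map(_.trim), preprocessed = true)
}

val raw   = Document("doc-1", "  My Title  ", Seq(" scala ", "case class"))
val clean = raw.preprocess()
// clean.title == "My Title"; a second preprocess() call returns the same instance
```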
Another option would be to make some of your parameters a private val and expose the corresponding getter for the preprocessed data:
case class Document(id: String, title: String, private val _keywords: Seq[String]) {
  val keywords = _keywords.map(kw => kw.trim)
}
But: Pattern matching and the default toString implementation will not give you quite what you want...
After changing context for half an hour, I looked at this problem with fresh eyes and came up with this:
case class Document(id: String, title: String, var keywords: Seq[String]) {
  keywords = keywords.map(kw => kw.trim)
}
I simply make the argument mutable by adding var and clean up the data in the class body.
OK, I know my data is not immutable anymore and Martin Odersky will probably kill a kitten after seeing this, but hey... I managed to do what I want by adding three characters. I call this a win :)
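A quick check that the var-in-body trick behaves as described (sample values are invented). Note that the field stays publicly mutable afterwards, which is the price paid:

```scala
case class Document(id: String, title: String, var keywords: Seq[String]) {
  // Runs at construction time, before anyone can observe the raw values
  keywords = keywords.map(_.trim)
}

val d = Document("doc-1", "Title", Seq("  scala  ", " fp "))
// d.keywords == Seq("scala", "fp"), but nothing stops a later d.keywords = Nil
```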

Is there something like AutoMapper for Scala?

I have been looking for some scala fluent API for mapping object-object, similar to AutoMapper.
Are there such tools in Scala?
I think there's less need for something like AutoMapper in Scala: if you use idiomatic Scala, models are easier to write and manipulate, and you can easily define automatic flattening/projection using implicit conversions.
For example here is the equivalent in Scala of AutoMapper flattening example:
// The full model
case class Order(customer: Customer, items: List[OrderLineItem] = List()) {
  def addItem(product: Product, quantity: Int) =
    copy(items = OrderLineItem(product, quantity) :: items)
  def total = items.foldLeft(0.0) { _ + _.total }
}
case class Product(name: String, price: Double)
case class OrderLineItem(product: Product, quantity: Int) {
  def total = quantity * product.price
}
case class Customer(name: String)
case class OrderDto(customerName: String, total: Double)

// The flattening conversion
object Mappings {
  implicit def order2OrderDto(order: Order): OrderDto =
    OrderDto(order.customer.name, order.total)
}

// A working example
import Mappings._
val customer = Customer("George Costanza")
val bosco = Product("Bosco", 4.99)
val order = Order(customer).addItem(bosco, 15)
val dto: OrderDto = order // automatic conversion at compile time!
println(dto) // prints: OrderDto(George Costanza,74.85000000000001)
PS: I should not use Double for money amounts...
I agree with @paradigmatic: the code will be much cleaner using Scala, but sometimes you find yourself mapping between case classes that look very similar, and that's just a waste of keystrokes.
I've started working on a project to address the issues, you can find it here: https://github.com/bfil/scala-automapper
It uses macros to generate the mappings for you.
At the moment it can map a case class to a subset of the original case class, and it handles optional fields as well as other minor things.
I'm still trying to figure out how to design the api to support renaming or mapping specific fields with custom logic, any idea or input on that would be very helpful.
It can be used for some simple cases right now, and of course if the mapping gets very complex it might just be better to define the mapping manually.
The library also allows you to manually define Mapping types between case classes, which can be provided as an implicit parameter to an AutoMapping.map(sourceClass) or sourceClass.mapTo[TargetClass] call.
UPDATE
I've just released a new version that handles Iterables and Maps and allows you to pass in dynamic mappings (to support renaming and custom logic, for example).
For complex mappings, one may want to consider Java-based mappers like:
http://modelmapper.org/user-manual/property-mapping/#conditional-mapping
http://github.com/smooks/smooks/tree/v1.5.1/smooks-examples/model-driven-basic-virtual/
http://orika-mapper.github.io/orika-docs/advanced-mappings.html
Scala objects can be accessed from Java:
http://lampwww.epfl.ch/~michelou/scala/using-scala-from-java.html
http://lampwww.epfl.ch/~michelou/android/java-to-scala.html
Implementing implicit conversions for complex objects would be smoother with declarative mappings than with handcrafted ones.
Found a longer list here:
http://www.javacodegeeks.com/2013/10/java-object-to-object-mapper.html

scala: map-like structure that doesn't require casting when fetching a value?

I'm writing a data structure that converts the results of a database query. The raw structure is a Java ResultSet, and it would be converted to a map or class that permits accessing different fields of that data structure by either a named method call or by passing a string into apply(). Clearly, different values may have different types. To reduce the burden on clients of this data structure, my preference is that one not need to cast the values, yet the fetched value still has the correct type.
For example, suppose I'm doing a query that fetches two column values, one an Int, the other a String. The names of the columns are "a" and "b" respectively. Some ideal syntax might be the following:
val javaResultSet = dbQuery("select a, b from table limit 1")
// with ResultSet, particular values can be accessed like this:
val a = javaResultSet.getInt("a")
val b = javaResultSet.getString("b")
// but this syntax is undesirable.
// since I want to convert this to a single data structure,
// the preferred syntax might look something like this:
val newStructure = toDataStructure[Int, String](javaResultSet)("a", "b")
// that is, I'm willing to state the types during the instantiation
// of such a data structure.
// then,
val a: Int = newStructure("a") // OR
val a: Int = newStructure.a
// in both cases, "val a" does not require asInstanceOf[Int].
I've been trying to determine what sort of data structure might allow this and I could not figure out a way around the casting.
The other requirement is obviously that I would like to define a single data structure used for all db queries. I realize I could easily define a case class or similar per call and that solves the typing issue, but such a solution does not scale well when many db queries are being written. I suspect some people are going to propose using some sort of ORM, but let us assume for my case that it is preferred to maintain the query in the form of a string.
Anyone have any suggestions? Thanks!
To do this without casting, one needs more information about the query, and one needs that information at compile time.
I suspect some people are going to propose using some sort of ORM, but let us assume for my case that it is preferred to maintain the query in the form of a string.
Your suspicion is right and you will not get around this. If current ORMs or DSLs like squeryl don't suit your fancy, you can create your own one. But I doubt you will be able to use query strings.
The basic problem is that you don't know how many columns there will be in any given query, and so you don't know how many type parameters the data structure should have and it's not possible to abstract over the number of type parameters.
There is, however, a data structure that exists in different variants for different numbers of type parameters: the tuple (e.g. Tuple2, Tuple3, etc.). You could define parameterized mapping functions for different numbers of parameters that return tuples, like this:
def toDataStructure2[T1, T2](rs: ResultSet)(c1: String, c2: String) =
  (rs.getObject(c1).asInstanceOf[T1],
   rs.getObject(c2).asInstanceOf[T2])

def toDataStructure3[T1, T2, T3](rs: ResultSet)(c1: String, c2: String, c3: String) =
  (rs.getObject(c1).asInstanceOf[T1],
   rs.getObject(c2).asInstanceOf[T2],
   rs.getObject(c3).asInstanceOf[T3])
You would have to define these for as many columns as you expect to have in your tables (max 22).
This of course depends on the assumption that calling getObject and casting to the given type is safe.
In your example you could use the resulting tuple as follows:
val (a, b) = toDataStructure2[Int, String](javaResultSet)("a", "b")
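To see the casting behavior without a database, the same shape can be tried against a plain Map standing in for a ResultSet row (the Map and column names here are just for illustration):

```scala
// rs.getObject(c) is replaced by row(c); the cast is exactly the same
def toDataStructure2[T1, T2](row: Map[String, Any])(c1: String, c2: String): (T1, T2) =
  (row(c1).asInstanceOf[T1], row(c2).asInstanceOf[T2])

val row = Map[String, Any]("a" -> 1, "b" -> "hello")
val (a, b) = toDataStructure2[Int, String](row)("a", "b")
// a == 1 and b == "hello"; a wrong type argument would only fail at run time
```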
If you decide to go the route of heterogeneous collections, there are some very interesting posts on heterogeneously typed lists.
One, for instance, is:
http://jnordenberg.blogspot.com/2008/08/hlist-in-scala.html
http://jnordenberg.blogspot.com/2008/09/hlist-in-scala-revisited-or-scala.html
with an implementation at
http://www.assembla.com/wiki/show/metascala
A second great series of posts starts with
http://apocalisp.wordpress.com/2010/07/06/type-level-programming-in-scala-part-6a-heterogeneous-list%C2%A0basics/
The series continues with parts b, c, and d, linked from part a.
Finally, there is a talk by Daniel Spiewak which touches on HOMaps:
http://vimeo.com/13518456
So all this is to say that perhaps you can build your solution from these ideas. Sorry that I don't have a specific example, but I admit I haven't tried these out myself yet!
Joshua Bloch has introduced a heterogeneous collection that can be written in Java. I once adapted it a little; it now works as a value register. It is basically a wrapper around two maps. Here is the code, and this is how you can use it. But this is just FYI, since you are interested in a Scala solution.
In Scala I would start by playing with tuples. Tuples are kinda heterogeneous collections. The results can be, but don't have to be, accessed through fields like _1, _2, _3 and so on. But you don't want that; you want names. This is how you can assign names to those:
scala> val tuple = (1, "word")
tuple: (Int, String) = (1,word)

scala> val (a, b) = tuple
a: Int = 1
b: String = word
So, as mentioned before, I would try to build a ResultSetWrapper around tuples.
If you want to "extract the column value by name" on a plain bean instance, you can probably:
use reflection and casts, which you (and I) don't like;
use a ResultSetToJavaBeanMapper provided by most ORM libraries, which is a little heavy and coupled;
write a Scala compiler plugin, which is too complex to control.
So, I guess a lightweight ORM with the following features may satisfy you:
supports raw SQL
supports a lightweight, declarative and adaptive ResultSetToJavaBeanMapper
nothing else
I made an experimental project based on that idea. Note that it's still an ORM, but I think it may be useful to you, or at least give you some hints.
Usage:
declare the model:
//declare DB schema
trait UserDef extends TableDef {
  var name = property[String]("name", title = Some("姓名"))
  var age1 = property[Int]("age", primary = true)
}

//declare model, and it mixes in properties as {var name = ""}
@BeanInfo class User extends Model with UserDef

//declare an object.
//it mixes in properties as {var name = Property[String]("name")}
//and object User is a Mapper[User], thus it can translate a ResultSet to a User instance.
object `package` {
  @BeanInfo implicit object User extends Table[User]("users") with UserDef
}
Then call raw SQL; the implicit Mapper[User] works for you:
val users = SQL("select name, age from users").all[User]
users.foreach{user => println(user.name)}
Or even build a type-safe query:
val users = User.q.where(User.age > 20).where(User.name like "%liu%").all[User]
for more, see unit test:
https://github.com/liusong1111/soupy-orm/blob/master/src/test/scala/mapper/SoupyMapperSpec.scala
project home:
https://github.com/liusong1111/soupy-orm
It uses abstract types and implicits heavily to make the magic happen; you can check the source code of TableDef, Table, and Model for details.
Several million years ago, I wrote an example showing how to use Scala's type system to push and pull values from a ResultSet. Check it out; it matches what you want to do fairly closely.
implicit val conn = connect("jdbc:h2:f2", "sa", "");
implicit val s: Statement = conn << setup;

val insertPerson = conn prepareStatement "insert into person(type, name) values(?, ?)";
for (val name <- names)
  insertPerson << rnd.nextInt(10) << name <<!;

for (val person <- query("select * from person", rs => Person(rs, rs, rs)))
  println(person.toXML);

for (val person <- "select * from person" <<! (rs => Person(rs, rs, rs)))
  println(person.toXML);
Primitive types are used to guide the Scala compiler into selecting the right functions on the ResultSet.
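The core trick there, letting the expected type select the accessor, can be sketched without JDBC using a type class over a stand-in row (all names here are invented for illustration):

```scala
// A type class: the expected result type picks the extraction logic
trait Get[A] { def get(row: Map[String, Any], col: String): A }

implicit val getInt: Get[Int] = (row, col) => row(col).asInstanceOf[Int]
implicit val getString: Get[String] = (row, col) => row(col).asInstanceOf[String]

// Clients state the type once; no asInstanceOf at the call site
def column[A](row: Map[String, Any], col: String)(implicit g: Get[A]): A =
  g.get(row, col)

val row = Map[String, Any]("id" -> 42, "name" -> "Ada")
val id: Int = column[Int](row, "id")
val name: String = column[String](row, "name")
```

Against a real ResultSet, the Get instances would call rs.getInt / rs.getString instead of casting, which is exactly how the compiler "selects the right functions".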