I have a class with a constructor parameter like this:
@Transient val applicationType: Option[String] = None,
However, Squeryl doesn't notice the @Transient annotation and still tries to read the value of this field from the database, even though no such field exists there.
My investigations so far have shown that, as I suspected, Squeryl only looks at method and field annotations, whereas the Scala compiler places the annotation only on the constructor argument (I can see this with javap).
So how can I fix this?
The class is not a case class because I'm extending a case class, and case classes shouldn't extend other case classes.
You can also tell scalac that you want the annotation to appear on the field. See this answer for the proper syntax.
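For reference, that syntax is a meta-annotation from scala.annotation.meta; here is a minimal sketch, assuming Squeryl's org.squeryl.annotations.Transient (the class name is illustrative):

import org.squeryl.annotations.Transient
import scala.annotation.meta.field

// The @field meta-annotation tells scalac to place the annotation
// on the generated field instead of only on the constructor parameter.
class MyEntity(
  @(Transient @field) val applicationType: Option[String] = None
)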
Just change the constructor argument to a plain one:
_applicationType: Option[String] = None,
and introduce the val separately:
@Transient val applicationType = _applicationType
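Putting it together, a minimal sketch of the whole workaround (the class name is illustrative):

import org.squeryl.annotations.Transient

class MyEntity(
  // plain constructor parameter; Squeryl never sees an annotation here
  _applicationType: Option[String] = None
) {
  // the annotation now lands on a real field, so Squeryl skips it
  @Transient
  val applicationType: Option[String] = _applicationType
}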
I have always seen that, when using a map function, we can create a DataFrame from an RDD using a case class like below:
case class filematches(
  row_num: Long,
  matches: Long,
  non_matches: Long,
  non_match_column_desc: Array[String]
)
newrdd1.map(x => filematches(x._1, x._2, x._3, x._4)).toDF()
This works great, as we all know!
I was wondering why we specifically need case classes here.
We should be able to achieve the same effect using normal classes with parameterized constructors (as the parameters will be vals and not private):
class filematches1(
  val row_num: Long,
  val matches: Long,
  val non_matches: Long,
  val non_match_column_desc: Array[String]
)
newrdd1.map(x => new filematches1(x._1, x._2, x._3, x._4)).toDF
Here, I am using the new keyword to instantiate the class.
Running the above gives me the error:
error: value toDF is not a member of org.apache.spark.rdd.RDD[filematches1]
I am sure I am missing some key concept on case classes vs regular classes here, but I have not been able to find it yet.
To resolve the error
value toDF is not a member of org.apache.spark.rdd.RDD[...]
you should move your case class definition out of the function where you are using it. You can refer to http://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/Spark-Scala-Error-value-toDF-is-not-a-member-of-org-apache/td-p/29878 for more detail; a sketch of the fix follows.
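A minimal sketch of that fix (the object and method names, and the SparkSession wiring, are illustrative):

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, SparkSession}

// Defined at the top level, not inside the method that calls toDF(),
// so Spark can derive the implicit Encoder for it.
case class filematches(
  row_num: Long,
  matches: Long,
  non_matches: Long,
  non_match_column_desc: Array[String]
)

object FileMatchesJob {
  def toDataFrame(spark: SparkSession,
                  newrdd1: RDD[(Long, Long, Long, Array[String])]): DataFrame = {
    import spark.implicits._
    newrdd1.map(x => filematches(x._1, x._2, x._3, x._4)).toDF()
  }
}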
On your other query: case classes are syntactic sugar, and they provide the following additional things:
Case classes are different from regular classes; they are specifically used for creating immutable objects.
They have a default apply function, which is used as a constructor to create objects (so less code).
All the parameters of a case class are vals by default, hence immutable, which is a good thing in the Spark world, as all RDDs are immutable.
An example of a case class is:
case class Book(name: String)
val book1 = Book("test")
You cannot change the value of book1.name, as it is immutable, and you do not need to say new Book() to create an object here.
The class parameters are public by default, so you don't need setters and getters.
Moreover, when comparing two case class instances, their structure is compared instead of their references.
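A quick illustration of that structural equality:

case class Point(x: Int, y: Int)

val a = Point(1, 2)
val b = Point(1, 2)
println(a == b) // true: the generated equals compares the fields
println(a eq b) // false: they are still two different references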
Edit: Spark uses the following class to infer the schema.
Code Link :
https://github.com/apache/spark/blob/branch-2.4/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala
If you check the schemaFor function (lines 719 to 791), you'll see it converts Scala types to Catalyst types. I think the case for handling non-case classes in schema inference has not been added yet, so every time you try to use a non-case class with schema inference, it falls through to the default case and hence gives the error Schema for type $other is not supported.
Hope this helps.
Consider the case that I want to deserialize a JSON string:
def deserialize[T](json: String): T
I can explicitly provide the class I want the function applied to while writing the code, like:
class Person(name: String)
deserialize[Person]("""{ "name": "Jennie" }""")
But what if I need another class? I would have to provide it in my code and compile again. I want my program to be more flexible: it should take a config file that contains the name of the class I want to use. Then, whenever a new class is required, I just need to write the class definition, build it into another jar file, put it on the classpath, and restart the program.
val config = ConfigLoader.load("config.txt")
val className = config.getString("class-to-deserialize")
deserialize[<from className to type>](json)
So, is it possible to do that in scala?
No. But because of type erasure, if you have a function def deserialize[T](json: String), its behavior can't depend on T in the first place and it doesn't matter what you pass as the type parameter. You may just need to add a cast at the end.
What is possible is to write such a function which also accepts an implicit ClassTag or TypeTag parameter, in which case you just need to create the parameter from class/type name, and that's entirely possible: just search for questions about this.
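A minimal sketch of that approach, assuming the deserializer only needs the runtime class (the body and the helper are placeholders):

import scala.reflect.ClassTag

def deserialize[T](json: String)(implicit ct: ClassTag[T]): T = {
  val runtimeClass = ct.runtimeClass // hand this to your JSON library
  ??? // actual parsing left out of the sketch
}

// Build a ClassTag from the class name read from the config file.
def classTagFor(className: String): ClassTag[Any] =
  ClassTag(Class.forName(className))

val obj = deserialize(json)(classTagFor(className))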
Scala case classes can have more than 22 properties these days, but AFAIK the compiler does not generate apply/unapply methods then.
Is there a way to generate apply/unapply by means of a plugin at compile time, or at least generate the methods using an IDE, etc.?
Note
please don't start asking - why do you need this? It is for mapping an existing JSON schema from a MongoDB using ReactiveMongo
please don't advise grouping properties into smaller case classes, etc. The schema was created by someone else and already exists in production.
Thank you for your answers in advance.
Yes, Scala supports more than 22 fields since version 2.11. However, there are certain limitations: the case class will no longer have unapply or unapplySeq and tupled (you can no longer convert the case class to a tuple), because Scala still doesn't support tuples with more than 22 values.
val tup = (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22) //will compile
val tup = (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23) //will fail
Because of this, such a case class behaves much more like a regular class, and many other libraries, such as JSON serializer libraries, will be unable to fully utilize it.
I faced this issue when I tried to use the read/write macros to serialize a case class to JSON and vice versa in a Play Framework project; it wouldn't compile because the case class no longer contains an unapply method. The workaround is to provide custom implicit reads/writes for the case class instead of using the macros.
import play.api.libs.json._
import play.api.libs.functional.syntax._

case class Person(name: String, age: Int, lovesChocolate: Boolean)

implicit val personReads = Json.reads[Person] // this won't work for 22+ fields; write a custom Reads as below

implicit val personReads: Reads[Person] = (
  (__ \ "name").read[String] and
  (__ \ "age").read[Int] and
  (__ \ "lovesChocolate").read[Boolean]
)(Person.apply _)
please don't start asking - why do you need this? It is for mapping
existing JSON schema from a mongoDB using Reactive Mongo
I'm assuming yours is the same situation: you are using ReactiveMongo macros for JSON to/from case class serialization.
implicit val personReader: BSONDocumentReader[Person] = Macros.reader[Person]
implicit val personWriter: BSONDocumentWriter[Person] = Macros.writer[Person]
//or Handler
Macros.handler[Person]
Therefore, I would suggest you use a custom BSON reader and writer for the case class, as documented here.
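A minimal sketch of such a hand-written reader/writer, assuming the classic reactivemongo.bson API (the field names are illustrative):

import reactivemongo.bson._

implicit object PersonReader extends BSONDocumentReader[Person] {
  def read(doc: BSONDocument): Person =
    Person(
      doc.getAs[String]("name").get,
      doc.getAs[Int]("age").get,
      doc.getAs[Boolean]("lovesChocolate").get
    )
}

implicit object PersonWriter extends BSONDocumentWriter[Person] {
  def write(p: Person): BSONDocument =
    BSONDocument(
      "name" -> p.name,
      "age" -> p.age,
      "lovesChocolate" -> p.lovesChocolate
    )
}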
I want to declare a class like this:
class StringSetCreate(val s: String*) {
// ...
}
and call that in Java. The problem is that the constructor is of type
public StringSetCreate(scala.collection.Seq)
So in Java you need to fiddle around with Scala sequences, which is ugly.
I know that there is the @annotation.varargs annotation which, if used on a method, generates a second method that takes Java varargs.
This annotation does not work on constructors; at least, I don't know where to put it. I found a Scala issue, SI-8383, which reports this problem. As far as I understand, there is currently no solution. Is this right? Are there any workarounds? Can I somehow define that second constructor by hand?
The bug is already filed as https://issues.scala-lang.org/browse/SI-8383 .
For a workaround I'd recommend using a factory method (perhaps on the companion object), where @varargs should work:
import scala.annotation.varargs

object StringSetCreate {
  @varargs def build(s: String*) = new StringSetCreate(s: _*)
}
Then in Java you call StringSetCreate.build("a", "b") rather than using new.
I want to define some annotations and use them in Scala.
I looked into the Scala source and found that the scala.annotation package contains annotations like tailrec, switch, elidable, and so on. So I defined some annotations the same way:
import scala.annotation.StaticAnnotation

class A extends StaticAnnotation

@A
class X {
  @A
  def aa() {}
}
Then I write a test:
object Main {
  def main(args: Array[String]) {
    val x = new X
    println(x.getClass.getAnnotations.length)
    x.getClass.getAnnotations map { println }
  }
}
It prints some strange messages:
1
#scala.reflect.ScalaSignature(bytes=u1" !1* 1!AbCaE
9"a!Q!! 1gn!!.<b iBPE*,7
Ii#)1oY1mC&1'G.Y(cUGCa#=S:LGO/AA!A 1mI!)
It seems I can't get the annotation aaa.A.
How can I create annotations in Scala correctly? And how to use and get them?
FWIW, you can now define Scala annotations in Scala 2.10 and use reflection to read them back.
Here are some examples:
Reflecting Annotations in Scala 2.10
Could it have something to do with retention? I bet @tailrec is not included in the generated bytecode.
If I try to extend ClassfileAnnotation (in order to have runtime retention), Scala tells me that it can't be done, and it has to be done in Java:
./test.scala:1: warning: implementation restriction: subclassing Classfile does not
make your annotation visible at runtime. If that is what
you want, you must write the annotation class in Java.
class A extends ClassfileAnnotation
^
I think you can only define annotations in Java now.
http://www.scala-lang.org/node/106
You can find a nice description of how annotations are to be used in Scala in Programming Scala.
So you can define and use annotations in Scala. However, there is at least one limitation:
Runtime retention is not quite possible. In theory you should subclass ClassfileAnnotation to achieve this, but currently scalac reports the following warning if you do:
"implementation restriction: subclassing Classfile does not make your annotation visible at runtime. If that is what you want, you must write the annotation class in Java."
It also means that your code is fine as it is (at least, as fine as is currently possible in Scala), but the annotation is on the class only at compile time. So you could use it e.g. in compiler plugins, but you will not be able to access it at runtime.
With Scala 2.11.6, this works to extract the values of an annotation:
import scala.reflect.runtime.{universe => u}
import u._

case class MyAnnotationClass(id: String) extends scala.annotation.StaticAnnotation

val myAnnotatedClass: ClassSymbol =
  u.runtimeMirror(Thread.currentThread().getContextClassLoader).staticClass("MyAnnotatedClass")
val annotation: Option[Annotation] =
  myAnnotatedClass.annotations.find(_.tree.tpe =:= u.typeOf[MyAnnotationClass])
val result = annotation.flatMap { a =>
  // the annotation's constructor argument appears as a literal in the tree
  a.tree.children.tail.collect { case Literal(Constant(id: String)) => doSomething(id) }.headOption
}