Scala timestamp/date zero argument constructor?

Squeryl requires a zero-argument constructor when using Option[] fields. I figured out how to supply a default for a Long (0L), but how do I create such a default for a Timestamp or a Date?
Essentially I need to finish this:
def this() = this(0L,"",TIMESTAMP,TIMESTAMP,0L,"","","","",Some(""),Some(""),"",DATE,DATE,false,false,false,Some(0L),Some(0),Some(0L))
Below is how I originally found the timestamp and date problem.
Background
Getting the following error in my Play! 2.0 Scala app (also using Squeryl):
Caused by: java.lang.RuntimeException: Could not deduce Option[] type of field 'startOrder' of class models.Job
This field in models.Job:
@Column("start_order")
var startOrder: Option[Int],
And in the Postgres DB it is defined as an integer. Is there different handling of models in Play! 2.0, is this a bug, or is it a Squeryl problem? Thanks!
Stack trace (it looks like a Squeryl problem):
Caused by: java.lang.RuntimeException: Could not deduce Option[] type of field 'startOrder' of class models.Job
at scala.sys.package$.error(package.scala:27) ~[scala-library.jar:na]
at scala.Predef$.error(Predef.scala:66) ~[scala-library.jar:0.11.2]
at org.squeryl.internals.FieldMetaData$$anon$1.build(FieldMetaData.scala:441) ~[squeryl_2.9.1-0.9.4.jar:na]
at org.squeryl.internals.PosoMetaData$$anonfun$3.apply(PosoMetaData.scala:111) ~[squeryl_2.9.1-0.9.4.jar:na]
at org.squeryl.internals.PosoMetaData$$anonfun$3.apply(PosoMetaData.scala:80) ~[squeryl_2.9.1-0.9.4.jar:na]
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:176) ~[scala-library.jar:0.11.2]

If startOrder is defined as
val startOrder: Option[java.sql.Timestamp]
in the class definition, then I believe
Some(new java.sql.Timestamp(0))
should be passed to the constructor.
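Putting that together, a minimal sketch of the finished zero-argument constructor might look like this (assuming the TIMESTAMP fields are java.sql.Timestamp and the DATE fields are java.util.Date; use java.sql.Date instead if that is what the class declares):
def this() = this(
  0L, "",
  new java.sql.Timestamp(0), new java.sql.Timestamp(0),   // dummy values for the TIMESTAMP fields
  0L, "", "", "", "",
  Some(""), Some(""), "",
  new java.util.Date(0), new java.util.Date(0),            // dummy values for the DATE fields (assumed java.util.Date)
  false, false, false,
  Some(0L), Some(0), Some(0L)
)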

Option is used when a value is optional, i.e. there may or may not be a value. If there is a value, you wrap it in Some; if there is no value, you use None.

Related

Creating Spark Dataframes from regular classes

I have always seen that, when we use a map function, we can create a DataFrame from an RDD using a case class, like below:
case class filematches(
  row_num: Long,
  matches: Long,
  non_matches: Long,
  non_match_column_desc: Array[String]
)
newrdd1.map(x => filematches(x._1, x._2, x._3, x._4)).toDF()
This works great, as we all know!
I was wondering: why do we specifically need case classes here?
We should be able to achieve the same effect using normal classes with parameterized constructors (as their parameters will be vals and not private):
class filematches1(
  val row_num: Long,
  val matches: Long,
  val non_matches: Long,
  val non_match_column_desc: Array[String]
)
newrdd1.map(x => new filematches1(x._1, x._2, x._3, x._4)).toDF
Here, I am using the new keyword to instantiate the class.
Running the above gives me the error:
error: value toDF is not a member of org.apache.spark.rdd.RDD[filematches1]
I am sure I am missing some key concept about case classes vs regular classes here, but I have not been able to find it yet.
To resolve the error
value toDF is not a member of org.apache.spark.rdd.RDD[...]
you should move your case class definition out of the function where you are using it. You can refer to http://community.cloudera.com/t5/Advanced-Analytics-Apache-Spark/Spark-Scala-Error-value-toDF-is-not-a-member-of-org-apache/td-p/29878 for more detail.
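As a rough sketch of that fix (assuming a SparkSession named spark is in scope; the class and RDD names come from the question):
// Defined at the top level, outside the method that calls toDF
case class filematches(
  row_num: Long,
  matches: Long,
  non_matches: Long,
  non_match_column_desc: Array[String]
)

import spark.implicits._   // brings the toDF implicit into scope
val df = newrdd1.map(x => filematches(x._1, x._2, x._3, x._4)).toDF()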
On your other query: case classes are syntactic sugar, and they provide the following additional things.
Case classes are different from general classes. They are specifically used for creating immutable objects.
They have a default apply function which is used as the constructor to create objects (so less code).
All the variables in a case class are val by default, hence immutable, which is a good thing in the Spark world as all RDDs are immutable.
An example of a case class is
case class Book(name: String)
val book1 = Book("test")
You cannot change the value of book1.name as it is immutable, and you do not need to say new Book() to create the object here.
The class variables are public by default, so you don't need setters and getters.
Moreover, when comparing two objects of a case class, their structure is compared instead of their references.
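A small sketch tying those points together (using the Book example from above; this is standard Scala behavior):
case class Book(name: String)

val book1 = Book("test")      // the companion's apply acts as the constructor, no 'new' needed
// book1.name = "other"       // does not compile: case class fields are immutable vals
val book2 = Book("test")
println(book1 == book2)       // true: case classes compare by structure, not by reference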
Edit: Spark uses the following class to infer the schema.
Code link:
https://github.com/apache/spark/blob/branch-2.4/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala
If you check the schemaFor function (lines 719 to 791), it converts Scala types to Catalyst types. I think the case to handle non-case classes for schema inference has not been added yet, so every time you try to use a non-case class with schema inference, it falls through to the catch-all case and gives the error Schema for type $other is not supported.
Hope this helps

Scala convert Map$ to Map

I have an exception:
java.lang.ClassCastException: scala.collection.immutable.Map$ cannot
be cast to scala.collection.immutable.Map
which I'm getting in this part of the code:
val iterator = new CsvMapper()
.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES)
.readerFor(Map.getClass).`with`(CsvSchema.emptySchema().withHeader()).readValues(reader)
while (iterator.hasNext) {
println(iterator.next.asInstanceOf[Map[String, String]])
}
So, are there any options to avoid this issue? Because this:
val iterator = new CsvMapper()
.disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES)
.readerFor(Map[String,String].getClass).`with`(CsvSchema.emptySchema().withHeader()).readValues(reader)
doesn't help, because I get
[error] Unapplied methods are only converted to functions when a function type is expected.
[error] You can make this conversion explicit by writing `apply _` or `apply(_)` instead of `apply`.
Thanks in advance
As has been pointed out in the earlier comments, in general you need classOf[X[_,_]] rather than X.getClass or X[A, B].getClass for a class that takes two generic types. (instance.getClass retrieves the class of the associated instance; classOf[X] does the same for some type X when an instance isn't available. Since Map is an object, and objects are also instances, Map.getClass retrieves the class type of the object Map, the Map trait's companion.)
However, a second problem here is that scala.collection.immutable.Map is abstract (it's actually a trait), and so it cannot be instantiated as-is. (If you look at the type of Scala Map instances created via the companion's apply method, you'll see that they're actually instances of classes such as Map.EmptyMap or Map.Map1, etc.) As a consequence, that's why your modified code still produced an error.
However, the ultimate problem here is that you require, as you mentioned, a Java java.util.Map and not a Scala scala.collection.immutable.Map (which is what you get by default if you just type Map in a Scala program). Just one more thing to watch out for when converting Java code examples to Scala. ;-)
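Putting those points together, a rough sketch of a corrected call (assuming you really do want each row as a java.util.Map[String, String]; reader is the same Reader as in the question):
import com.fasterxml.jackson.databind.DeserializationFeature
import com.fasterxml.jackson.dataformat.csv.{CsvMapper, CsvSchema}

val iterator = new CsvMapper()
  .disable(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES)
  .readerFor(classOf[java.util.Map[String, String]])         // classOf, and a concrete Java Map
  .`with`(CsvSchema.emptySchema().withHeader())
  .readValues[java.util.Map[String, String]](reader)
while (iterator.hasNext) {
  println(iterator.next())
}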

Kryo Serialization empty Deserialization

I refactored my code to work with Kryo serialization.
Everything works fine except deserializing a geometry property of a certain class.
No exception is thrown (I set "spark.kryo.registrationRequired" to true).
While debugging, I try to collect the data and I see that the geometry data is just empty; as a result, I understand that the deserialization failed.
The geometry is of type Any (Scala), perhaps because it is a complex property.
My question is why the data is empty, and whether there is a connection to the property's type being Any.
Update :
class code:
class Entity(val id: String) extends Serializable {
  var index: Any = null
  var geometry: Any = null
}
geometry contains a centroid, a shape and coordinates (a complex object)
You should not use Kryo with Scala, since the behavior of many Scala classes differs from Java classes and Kryo was originally written to work with Java. You will probably encounter many weird issues like this one if you use Kryo with Scala. You should instead use chill-scala, which is an extension of Kryo that handles all of Scala's special cases.
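For example, a minimal sketch using chill's Scala-aware Kryo instantiator (assuming the com.twitter chill-scala dependency is on the classpath):
import com.twitter.chill.ScalaKryoInstantiator

// A Kryo instance pre-configured with serializers for Scala collections,
// Option, Tuple, etc., instead of a bare new Kryo()
val kryo = new ScalaKryoInstantiator().newKryo()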

Squeryl doesn't detect annotated val constructor argument as transient

I have a class with a constructor parameter like this:
@Transient val applicationType: Option[String] = None,
However, Squeryl doesn't notice the #Transient annotation, and still tries to read the value of this field from the database. But there is no such field in the database.
My investigations so far have shown that, as I suspected, Squeryl only looks at method and field annotations, whereas the annotation is only placed by the Scala compiler on the constructor argument (I can see this with javap).
So how can I fix this?
The class is not a case class because I'm extending a case class, and case classes shouldn't extend other case classes.
You can also tell scalac that you want the annotation to appear on the field. See this answer for the proper syntax.
Just change the constructor argument to a plain one:
_applicationType: Option[String] = None,
and introduce the val separately
@Transient val applicationType = _applicationType
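Putting it together, a minimal sketch (the class name and the id field are illustrative; the annotation is assumed to be Squeryl's org.squeryl.annotations.Transient):
import org.squeryl.annotations.Transient

class MyEntity(
  val id: Long,
  _applicationType: Option[String] = None   // plain constructor argument, not a val
) {
  @Transient
  val applicationType: Option[String] = _applicationType   // the annotation now lands on the field itself
}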

Trying to compare a String with a BigDecimal in OpenJPA using CriteriaBuilder

list.add(build.equal(bondRoot.get(Bond_.writingCompanyCode),dtlRoot.get(ArPstdDtl_.companycd ).as(String.class)));
but I am getting the following error:
Caused by: org.apache.openjpa.persistence.ArgumentException: No metadata
was found for type "class java.lang.String". The class is not
enhanced.
Could someone help me out with this?
Changing the type from BigDecimal to String needs a conversion, not a cast. The as method cannot convert from one type to another; it is purely for typecasting, as documented:
Perform a typecast upon the expression, returning a new expression
object. This method does not cause type conversion: the runtime type
is not changed. Warning: may result in a runtime failure.
The Criteria API does not offer conversion from BigDecimal to String. Database-vendor-specific functions can be used via CriteriaBuilder.function, as sketched below.
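For example, a rough sketch using a vendor function such as TO_CHAR (available in Oracle and PostgreSQL; shown in Scala like the rest of this page, and the choice of function is database-specific):
// Convert the BigDecimal column to a String via a database function, then compare
val companyCdAsString =
  build.function("TO_CHAR", classOf[String], dtlRoot.get(ArPstdDtl_.companycd))
list.add(build.equal(bondRoot.get(Bond_.writingCompanyCode), companyCdAsString))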