scala json4s, can't convert LocalDate

I'm having an issue with org.json4s (Scala), org.joda.time.LocalDate and org.json4s.ext.JodaTimeSerializers. Given that JodaTimeSerializers.all includes a LocalDate conversion, I was hoping I could do the following, but I get the exception shown afterwards:
scala> import org.json4s.JString
import org.json4s.JString
scala> import org.joda.time.LocalDate
import org.joda.time.LocalDate
scala> import org.json4s.ext.JodaTimeSerializers
import org.json4s.ext.JodaTimeSerializers
scala> import org.json4s._
import org.json4s._
scala> implicit val formats: Formats = DefaultFormats ++ JodaTimeSerializers.all
formats: org.json4s.Formats = org.json4s.Formats$$anon$3@693d3d7f
scala> val jDate = JString("2016-01-26")
jDate: org.json4s.JsonAST.JString = JString(2016-01-26)
scala> jDate.extract[LocalDate]
org.json4s.package$MappingException: Can't convert JString(2016-01-26) to class org.joda.time.LocalDate
On the other hand, this works (not surprisingly):
scala> val jodaDate = LocalDate.parse(jDate.values)
I've tried to create a custom Serializer, which never gets called because it seems to fall into the JodaSerializer realm. I have also created a custom deserializer that works with java.time.LocalDate (ints and bytes from strings), but java.time.LocalDate messes with some other code, which is likely a different question. This question is: I'm looking for clues as to why JodaTimeSerializers.all cannot parse JString(2016-01-26), or any date string.
The top of the exception is: org.json4s.package$MappingException:
Can't convert JString(2016-01-01) to class org.joda.time.LocalDate (JodaTimeSerializers.scala:126)
Edit
This is still biting me, so I dug a bit further; it's reproducible with the following.
import org.joda.time.LocalDate
import org.json4s.ext.JodaTimeSerializers
import org.json4s._
implicit val formats: Formats = DefaultFormats ++ JodaTimeSerializers.all
import org.joda.time.LocalDate
case class MyDate(myDate: LocalDate)
val stringyDate =
"""
{
"myDate" : "2016-01-01"
}
"""
import org.json4s.jackson.JsonMethods.parse
parse(stringyDate).extract[MyDate]
org.json4s.package$MappingException: No usable value for myDate
Can't convert JString(2016-01-01) to class org.joda.time.LocalDate
This seems to happen because on line 125 of JodaTimeSerializers.scala the value is not a JObject, it is a JString, so it falls into the value case on line 126, which throws the error.
I'm adding this here in case it bites someone else, and hopefully to get some assistance fixing it, but now I'm late. I have moved the code locally and hope to come up with a fix tomorrow.

This works. I define a custom serializer for LocalDate.
import org.json4s.JString
import org.joda.time.LocalDate
import org.json4s._
case object LocalDateSerializer
  extends CustomSerializer[LocalDate](format =>
    (
      { case JString(s) => LocalDate.parse(s) },
      Map() /* TO BE IMPLEMENTED */
    )
  )
implicit val formats: Formats = DefaultFormats + LocalDateSerializer
val jDate = JString("2016-01-26")
jDate.extract[LocalDate] // res173: org.joda.time.LocalDate = 2016-01-26
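For completeness, here is a minimal sketch of a two-way version; the serialization branch is my own addition (using LocalDate.toString, which emits the ISO yyyy-MM-dd form), so both extraction and decomposition work:
import org.joda.time.LocalDate
import org.json4s._
import org.json4s.jackson.JsonMethods.parse

// JString -> LocalDate on extract, LocalDate -> JString on decompose.
case object LocalDateSerializer extends CustomSerializer[LocalDate](_ =>
  (
    { case JString(s) => LocalDate.parse(s) },
    { case d: LocalDate => JString(d.toString) }
  )
)

implicit val formats: Formats = DefaultFormats + LocalDateSerializer

case class MyDate(myDate: LocalDate)
parse("""{"myDate":"2016-01-01"}""").extract[MyDate]        // MyDate(2016-01-01)
Extraction.decompose(MyDate(LocalDate.parse("2016-01-01"))) // JObject(List((myDate,JString(2016-01-01))))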

The newer java.time serializers are also included in the library, but not in the default formats:
implicit val formats: Formats = DefaultFormats ++ JavaTimeSerializers.all

Related

ParTraverse Not a Value of NonEmptyList

I am following the instructions on the cats IO website to run a sequence of effects in parallel:
My code looks like:
val maybeNonEmptyList: Option[NonEmptyList[Urls]] = NonEmptyList.fromList(urls)
val maybeDownloads: Option[IO[NonEmptyList[Either[Error, Files]]]] = maybeNonEmptyList map { urls =>
urls.parTraverse(url => downloader(url))
}
But I get a compile time error saying:
value parTraverse is not a member of cats.data.NonEmptyList[Urls]
[error] urls.parTraverse(url => downloader(url))
I have imported the following:
import cats.data.{EitherT, NonEmptyList}
import cats.effect.{ContextShift, IO, Timer}
import cats.implicits._
import cats.syntax.parallel._
and I also have the following implicits:
implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
implicit val timer: Timer[IO] = IO.timer(ExecutionContext.global)
Why do I still get this error?
This is caused by the implicit enrichment being imported twice, which makes it ambiguous:
import cats.implicits._
import cats.syntax.parallel._
As of recent versions of cats, the implicits imports are never required, only the syntax ones.
The recommended pattern is to use only import cats.syntax.all._
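A minimal sketch of the corrected setup under that pattern (the Urls/Files types and downloader below are placeholders standing in for the ones in the question, and cats-effect 2 is assumed since the question uses ContextShift):
import cats.data.NonEmptyList
import cats.effect.{ContextShift, IO}
import cats.syntax.all._ // single syntax import; do NOT also import cats.implicits._

import scala.concurrent.ExecutionContext

object ParDownload {
  // Placeholder types and downloader, standing in for those in the question.
  type Urls  = String
  type Files = String
  final case class Error(msg: String)
  def downloader(url: Urls): IO[Either[Error, Files]] = IO.pure(Right(s"contents of $url"))

  // Parallel[IO] in cats-effect 2 needs a ContextShift in scope.
  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)

  def download(urls: List[Urls]): Option[IO[NonEmptyList[Either[Error, Files]]]] =
    NonEmptyList.fromList(urls).map(_.parTraverse(downloader))
}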

Transform a dataframe to a dataset using case class spark scala

I wrote the following code, which aims to transform a DataFrame into a Dataset using a case class:
def toDs[T](df: DataFrame): Dataset[T] = {
df.as[T]
}
together with the case class: case class DATA(name: String, age: Double, location: String)
I am getting:
Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
[error] df.as[T]
Any idea how to fix this?
You can read the data into a Dataset[MyCaseClass] in the following two ways:
Say you have the following class: case class MyCaseClass
1) First way: Import the SparkSession implicits into scope and use the as operator to convert your DataFrame to Dataset[MyCaseClass]:
case class MyCaseClass
val spark: SparkSession = SparkSession.builder.enableHiveSupport.getOrCreate()
import spark.implicits._
val ds: Dataset[MyCaseClass]= spark.read.format("FORMAT_HERE").load().as[MyCaseClass]
2) Second way: You can create your own encoder in another object and import it into your current code:
package com.funky.package
import org.apache.spark.sql.{Encoder, Encoders}
case class MyCaseClass
object MyCustomEncoders {
  implicit val mycaseClass: Encoder[MyCaseClass] = Encoders.product[MyCaseClass]
}
In the file containing the main method, import the above implicit value
import com.funky.package.MyCustomEncoders
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.Dataset
val spark: SparkSession = SparkSession.builder.enableHiveSupport.getOrCreate()
val ds: Dataset[MyCaseClass]= spark.read.format("FORMAT_HERE").load().as[MyCaseClass]
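As for the generic toDs helper in the question itself, the missing piece is an Encoder for T. A minimal sketch (assuming the caller brings an encoder into scope, e.g. via spark.implicits._):
import org.apache.spark.sql.{DataFrame, Dataset, Encoder}

// The context bound `T: Encoder` requires an implicit Encoder[T] at the call
// site, which is exactly what df.as[T] needs in order to compile.
def toDs[T: Encoder](df: DataFrame): Dataset[T] = df.as[T]

// Usage sketch, assuming a SparkSession named `spark` and a DataFrame `df`
// whose columns match the case class fields:
//   import spark.implicits._
//   case class DATA(name: String, age: Double, location: String)
//   val ds: Dataset[DATA] = toDs[DATA](df)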

Rewrite scala code to be more functional

I am trying to teach myself Scala whilst at the same time trying to write code that is idiomatic of a functional language, i.e. write better, more elegant, functional code.
I have the following code that works OK:
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}
import java.time.LocalDate
object DataFrameExtensions_ {
  implicit class DataFrameExtensions(df: DataFrame) {
    def featuresGroup1(groupBy: Seq[String], asAt: LocalDate): DataFrame = { df }
    def featuresGroup2(groupBy: Seq[String], asAt: LocalDate): DataFrame = { df }
  }
}
import DataFrameExtensions_._
val spark = SparkSession.builder().config(new SparkConf().setMaster("local[*]")).enableHiveSupport().getOrCreate()
import spark.implicits._
val df = Seq((8, "bat"),(64, "mouse"),(-27, "horse")).toDF("number", "word")
val groupBy = Seq("a","b")
val asAt = LocalDate.now()
val dataFrames = Seq(df.featuresGroup1(groupBy, asAt),df.featuresGroup2(groupBy, asAt))
The last line bothers me though. The two functions (featuresGroup1, featuresGroup2) both have the same signature:
scala> :type df.featuresGroup1(_,_)
(Seq[String], java.time.LocalDate) => org.apache.spark.sql.DataFrame
scala> :type df.featuresGroup2(_,_)
(Seq[String], java.time.LocalDate) => org.apache.spark.sql.DataFrame
and take the same vals as parameters, so I assume I can write that line in a more functional way (perhaps using .map somehow) so that I write the parameter list just once and pass it to both functions. I can't figure out the syntax though. I thought maybe I could construct a list of those functions, but that doesn't work:
scala> Seq(featuresGroup1, featuresGroup2)
<console>:23: error: not found: value featuresGroup1
Seq(featuresGroup1, featuresGroup2)
^
<console>:23: error: not found: value featuresGroup2
Seq(featuresGroup1, featuresGroup2)
^
Can anyone help?
I thought maybe I could construct a list of those functions but that doesn't work:
Why are you writing just featuresGroup1/2 here when you already had the correct syntax df.featuresGroup1(_,_) just above?
Seq(df.featuresGroup1(_,_), df.featuresGroup2(_,_)).map(_(groupBy, asAt))
df.featuresGroup1 _ should work as well.
df.featuresGroup1 by itself would work if you had an expected type, e.g.
val dataframes: Seq[(Seq[String], LocalDate) => DataFrame] =
Seq(df.featuresGroup1, df.featuresGroup2)
but in this specific case providing the expected type is more verbose than using lambdas.
I thought maybe I could construct a list of those functions but that doesn't work
You need to explicitly perform eta expansion to turn methods into functions (they are not the same in Scala), by using an underscore operator:
val funcs = Seq(df.featuresGroup1 _, df.featuresGroup2 _)
or by using placeholders:
val funcs = Seq(df.featuresGroup1(_, _), df.featuresGroup2(_, _))
And you are absolutely right about using map operator:
val dataFrames = funcs.map(f => f(groupBy, asAt))
I strongly recommend against using implicits of types like String or Seq: if they are used in multiple places, they lead to subtle bugs that are not immediately obvious from the code, and the code will be prone to breaking when it is moved somewhere else.
If you want to use implicits, wrap them in custom types:
case class DfGrouping(groupBy: Seq[String]) extends AnyVal
implicit val grouping: DfGrouping = DfGrouping(Seq("a", "b"))
Why not just create a function in DataFrameExtensions to do so?
def getDataframeGroups(groupBy: Seq[String], asAt: LocalDate) = Seq(featuresGroup1(groupBy, asAt), featuresGroup2(groupBy, asAt))
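For that to compile, the helper has to live where featuresGroup1 and featuresGroup2 are in scope, i.e. inside the implicit class, with asAt typed as LocalDate to match their signatures. A rough sketch:
import java.time.LocalDate
import org.apache.spark.sql.DataFrame

object DataFrameExtensions_ {
  implicit class DataFrameExtensions(df: DataFrame) {
    def featuresGroup1(groupBy: Seq[String], asAt: LocalDate): DataFrame = df
    def featuresGroup2(groupBy: Seq[String], asAt: LocalDate): DataFrame = df

    // Bundle both feature groups behind one call so the parameter list is
    // written only once.
    def getDataframeGroups(groupBy: Seq[String], asAt: LocalDate): Seq[DataFrame] =
      Seq(featuresGroup1(groupBy, asAt), featuresGroup2(groupBy, asAt))
  }
}

// usage: df.getDataframeGroups(groupBy, asAt)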
I think you could create a list of functions as below:
val funcs: List[DataFrame => (Seq[String], java.time.LocalDate) => org.apache.spark.sql.DataFrame] = List(_.featuresGroup1, _.featuresGroup2)
funcs.map(x => x(df)(groupBy, asAt))
It seems you have a list of functions which convert a DataFrame to another DataFrame. If that is the pattern, you could go a little bit further with Endo in Scalaz
I like this answer best, courtesy of Alexey Romanov.
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}
import java.time.LocalDate
object DataFrameExtensions_ {
  implicit class DataFrameExtensions(df: DataFrame) {
    def featuresGroup1(groupBy: Seq[String], asAt: LocalDate): DataFrame = { df }
    def featuresGroup2(groupBy: Seq[String], asAt: LocalDate): DataFrame = { df }
  }
}
import DataFrameExtensions_._
val spark = SparkSession.builder().config(new SparkConf().setMaster("local[*]")).enableHiveSupport().getOrCreate()
import spark.implicits._
val df = Seq((8, "bat"),(64, "mouse"),(-27, "horse")).toDF("number", "word")
val groupBy = Seq("a","b")
val asAt = LocalDate.now()
Seq(df.featuresGroup1(_,_), df.featuresGroup2(_,_)).map(_(groupBy, asAt))

Error when parsing a line from the data into the class. Spark MLlib

I have this code implemented:
scala> import org.apache.spark._
scala> import org.apache.spark.rdd.RDD
import org.apache.spark.rdd.RDD
scala> import org.apache.spark.util.IntParam
import org.apache.spark.util.IntParam
scala> import org.apache.spark.graphx._
import org.apache.spark.graphx._
scala> import org.apache.spark.graphx.util.GraphGenerators
import org.apache.spark.graphx.util.GraphGenerators
scala> case class Transactions(ID:Long,Chain:Int,Dept:Int,Category:Int,Company:Long,Brand:Long,Date:String,ProductSize:Int,ProductMeasure:String,PurchaseQuantity:Int,PurchaseAmount:Double)
defined class Transactions
When I try to run this:
def parseTransactions(str:String): Transactions = {
| val line = str.split(",")
| Transactions(line(0),line(1),line(2),line(3),line(4),line(5),line(6),line(7),line(8),line(9),line(10))
| }
I am obtaining this error: <console>:38: error: type mismatch;
found : String
required: Long
Does anyone know why I'm getting this error? I am doing a social network analysis over the schema that I put above.
Many thanks!
You are creating an array from the ","-separated values, and String.split returns an array of Strings. Convert each element to the appropriate type before assigning it to the case class arguments:
val line = str.split(",")
line(0).toLong
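Putting that together, a sketch of the full parser with every field converted to the type the case class declares (assuming the CSV columns come in the same order as the case class fields):
def parseTransactions(str: String): Transactions = {
  val line = str.split(",")
  Transactions(
    line(0).toLong,   // ID
    line(1).toInt,    // Chain
    line(2).toInt,    // Dept
    line(3).toInt,    // Category
    line(4).toLong,   // Company
    line(5).toLong,   // Brand
    line(6),          // Date (kept as String)
    line(7).toInt,    // ProductSize
    line(8),          // ProductMeasure
    line(9).toInt,    // PurchaseQuantity
    line(10).toDouble // PurchaseAmount
  )
}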

Scala Pickling usage MyObject -> Array[Byte] -> MyObject

I was trying to get into the new Scala Pickling library that was presented at the ScalaDays 2013: Scala Pickling
What I am really missing are some simple examples of how the library is used.
I understand that I can pickle some object and unpickle it again like this:
import scala.pickling._
val pckl = List(1, 2, 3, 4).pickle
val lst = pckl.unpickle[List[Int]]
In this example, pckl is of type Pickle. What exactly is the use of this type, and how can I, for example, get an Array[Byte] out of it?
If you want to pickle into bytes, then the code will look like this:
import scala.pickling._
import binary._
val pckl = List(1, 2, 3, 4).pickle
val bytes = pckl.value
If you wanted JSON, the code would look almost exactly the same, with a minor change of imports:
import scala.pickling._
import json._
val pckl = List(1, 2, 3, 4).pickle
val json = pckl.value
How the object is pickled depends on the format you import from under scala.pickling (either binary or json). Import binary and the value property is an Array[Byte]; import json and it is a JSON String.
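For reference, a minimal round-trip sketch based on the calls shown above (the pickling API shifted between releases, e.g. 0.10.x also expects import scala.pickling.Defaults._, so the exact imports may vary):
import scala.pickling._
import binary._

val original = List(1, 2, 3, 4)
val pckl = original.pickle          // a binary pickle wrapping the serialized data
val bytes: Array[Byte] = pckl.value // the raw bytes, e.g. for writing to disk or a socket
val restored = pckl.unpickle[List[Int]]
assert(restored == original)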