I have a List[Any] which I want to convert to a JsArray.
A list with a concrete element type works:
Json.arr(List("1"))
But:
Json.arr(List("1").asInstanceOf[List[Any]])
throws:
diverging implicit expansion for type play.api.libs.json.Reads[T1]
starting with method oFormatFromReadsAndOWrites in object OFormat
How can I convert List[Any] to JsArray?
I tried:
implicit val listAnyFormat: OFormat[List[Any]] = Json.format[List[Any]]
But that fails with:
No instance of Reads is available for scala.collection.immutable.Nil in the implicit scope
Using Play 2.8.x and Scala 2.11.8
You can't.
At least not without defining a Format[Any], which is technically possible but will likely not cover all the possible cases.
The real question is why you have a List[Any] in the first place; it doesn't make much sense in the Scala world.
It would be better to have a List[Something], where Something has a known set of subtypes, each with its own Format.
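For instance, a minimal sketch of that approach (the Something hierarchy here is made up):

import play.api.libs.json._

sealed trait Something
final case class Num(value: Int) extends Something
final case class Str(value: String) extends Something

object Something {
  // One Writes for the whole hierarchy: each known subtype serializes itself.
  implicit val writes: Writes[Something] = Writes {
    case Num(v) => JsNumber(v)
    case Str(v) => JsString(v)
  }
}

val xs: List[Something] = List(Num(1), Str("two"))
val arr: JsValue = Json.toJson(xs) // a JsArray: [1,"two"]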
Related
Based on this description of datasets and dataframes I wrote this very short test code which works.
import org.apache.spark.sql.functions._
val thing = Seq("Spark I am your father", "May the spark be with you", "Spark I am your father")
val wordsDataset = sc.parallelize(thing).toDS()
If that works... why does running this give me a
error: value toDS is not a member of org.apache.spark.rdd.RDD[org.apache.spark.sql.catalog.Table]
import org.apache.spark.sql.functions._
val sequence = spark.catalog.listDatabases().collect().flatMap(db =>
spark.catalog.listTables(db.name).collect()).toSeq
val result = sc.parallelize(sequence).toDS()
toDS() is not a member of RDD[T]. Welcome to the bizarre world of Scala implicits, where nothing is what it seems to be.
toDS() is a member of DatasetHolder[T]. SparkSession contains an object called implicits. When brought into scope with an expression like import spark.implicits._, an implicit method called rddToDatasetHolder becomes available for resolution:
implicit def rddToDatasetHolder[T](rdd: RDD[T])(implicit arg0: Encoder[T]): DatasetHolder[T]
When you call rdd.toDS(), the compiler first searches the RDD class and all of its superclasses for a method called toDS(). It doesn't find one, so it starts searching the compatible implicits in scope. There it finds the rddToDatasetHolder method, which accepts an RDD instance and returns an object of a type that does have a toDS() method. In effect, the compiler rewrites:
sc.parallelize(sequence).toDS()
into
spark.implicits.rddToDatasetHolder(sc.parallelize(sequence)).toDS()
Now, if you look at rddToDatasetHolder itself, it has two argument lists:
(rdd: RDD[T])
(implicit arg0: Encoder[T])
Implicit arguments in Scala are optional and if you do not supply the argument explicitly, the compiler searches the scope for implicits that match the required argument type and passes whatever object it finds or can construct. In this particular case, it looks for an instance of the Encoder[T] type. There are many predefined encoders for the standard Scala types, but for most complex custom types no predefined encoders exist.
So, in short: The existence of a predefined Encoder[String] makes it possible to call toDS() on an instance of RDD[String], but the absence of a predefined Encoder[org.apache.spark.sql.catalog.Table] makes it impossible to call toDS() on an instance of RDD[org.apache.spark.sql.catalog.Table].
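One way out, then, is to bring an Encoder[Table] into scope yourself. A minimal sketch using a generic Kryo-based encoder (note this stores each Table as a single binary column, so you lose the ability to query individual fields; mapping the tables to a case class of just the fields you need is often nicer):

import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.catalog.Table

// Fills the implicit Encoder[T] argument of rddToDatasetHolder.
implicit val tableEncoder: Encoder[Table] = Encoders.kryo[Table]

val result = sc.parallelize(sequence).toDS()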
By the way, SparkSession.implicits contains the implicit class StringToColumn which has a $ method. This is how the $"foo" expression gets converted to a Column instance for column foo.
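For example, assuming a SparkSession bound to spark and a DataFrame df with a column foo:

import spark.implicits._

// $"foo" is string-interpolator sugar for StringContext("foo").$(),
// which StringToColumn turns into a Column named "foo".
df.select($"foo")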
Resolving all the implicit arguments and implicit transformations is why compiling Scala code is so dang slow.
I am writing an http client and this is my signature:
def post[Req, Resp](json: Req)(implicit r: Reads[Resp], w: Writes[Req]): Future[Resp]
Using play json behind the scenes.
When I use it like this
def create(req: ClusterCreateRequest): Future[ClusterCreateResponse] = endpoint.post(req)
I get the following error
diverging implicit expansion for type play.api.libs.json.Reads[Resp]
The following works
def create(req: ClusterCreateRequest): Future[ClusterCreateResponse] = endpoint.post[ClusterCreateRequest, ClusterCreateResponse](req)
Why is type inference not working as expected? What can I do about this?
diverging implicit expansion for type play.api.libs.json.Reads[Resp]
means that there are several candidate JSON serializers for Resp in scope, none of which shadows the others.
From the information given in the post it's not possible to pinpoint the root cause and say "fix X and everything will work."
But you can try to "debug" the implicit search. Start by checking the implicit search order:
Where does Scala look for implicits? Enabling implicit parameter expansion in IntelliJ IDEA (Ctrl+Shift+=) might help you see which implicits cause the clash.
General advice for type class instances: keep them organized and declared in one place, either in the companion object or in a dedicated object.
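For example, the types from the question could carry their instances in their companion objects (a minimal sketch; the field names here are made up):

import play.api.libs.json.{Json, OFormat}

final case class ClusterCreateRequest(name: String)
final case class ClusterCreateResponse(id: String)

object ClusterCreateRequest {
  // Living in the companion object, this instance is part of the implicit
  // scope of ClusterCreateRequest, so call sites need no extra imports.
  implicit val format: OFormat[ClusterCreateRequest] = Json.format[ClusterCreateRequest]
}

object ClusterCreateResponse {
  implicit val format: OFormat[ClusterCreateResponse] = Json.format[ClusterCreateResponse]
}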
In Scala/Spark DataFrame
dfReduced.schema.fieldNames
is a java String array (String[]). However,
dfReduced.schema.fieldNames.asInstanceOf[Seq[String]]
throws
java.lang.ClassCastException: [Ljava.lang.String; cannot be cast to
scala.collection.Seq
Assigning the same array to a Seq[String] is fine, though:
val f3:Seq[String]=dfReduced.schema.fieldNames
As a Java programmer this surprises me, since both would require casting in Java. Can someone explain why Scala makes this distinction?
(Note, I'm not being critical, I just want to understand Scala better)
The reason val f3: Seq[String] = dfReduced.schema.fieldNames works is that Scala has an implicit conversion in scope (from Predef) that converts an Array[T] to a Seq[T]; the compiler inserts that conversion for you at compile time.
Java has no such implicit conversions.
As Leo C mentions in a comment, the difference is a run-time type cast versus a compile-time type ascription.
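A standalone sketch of that difference, outside Spark:

val names: Array[String] = Array("a", "b")

// Compile-time ascription: the compiler inserts an implicit conversion from
// Predef that wraps the JVM array in a real Seq object.
val asSeq: Seq[String] = names

// Run-time cast: asInstanceOf builds nothing, it merely asserts that the
// existing object already is a Seq. A JVM array is not, so this throws
// ClassCastException:
// names.asInstanceOf[Seq[String]]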
Hope this clears your doubt.
I have the following statements.
val a: Any = Array("1", "2", "3")
a match {
case p: Array[Int] => println("int")
case l: Array[String] => println("string")
}
val b: Any = List(1, 2, 3)
b match {
case l: List[String] => println("string")
case p: List[Int] => println("int")
}
The first block about Array compiles without warnings and outputs "string", while the second one about List compiles with warnings related to type erasure and outputs "string" as well.
I know something about type erasure on the JVM. At runtime, the JVM cannot really know the generic type of a container (such as a List). But why can Array avoid type erasure at runtime and get the right type matched?
I tried to find the answer from scala source code. The only thing I found is that Array uses ClassTag but List does not.
I'd like to know how ClassTag works. Is ClassTag a workaround for type erasure? And why haven't containers like List been implemented with ClassTag to avoid type erasure?
Scala runs on the JVM and inherits its constraints. Java employs type erasure, so at runtime all parameterized types look the same: the type arguments are erased. That was done to keep compatibility with older Java versions that could not use type parameters at all.
But arrays are a special case in Java: they do keep their element type at runtime, and so do Scala arrays. That was necessary to store memory-efficient unboxed values inside arrays.
You should just assume that all generic type information is lost at runtime, and use tags if you need to match against it.
ClassTags are not what makes the array matching work: there, the element type information is supplied by the JVM itself.
A common practice in Java is to fall back to Object (AnyRef) and dynamic casts whenever expressing type relations gets difficult. Scala gives you more expressive power for describing types statically, without runtime conversions, and Scala coding style encourages rich type constructions to keep code typesafe.
ClassTags and TypeTags are instruments that can only be used with statically typed code. They contain class and type information that the compiler derived at compile time: if the compiler can derive a type statically, it can also provide a tag for you to access that type at runtime.
This is useful when you write some kind of library and have no clue how it will be used. You require a ClassTag as an implicit parameter, and the compiler fills it in with the appropriate type based on the other arguments of the call: the implicit parameter is declared as a requirement by the library code and supplied automatically at the call site.
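For illustration, a sketch of such a library-style function (the function name is made up):

import scala.reflect.ClassTag

// The [T: ClassTag] context bound asks the caller's compiler to supply a
// ClassTag[T] implicitly, letting this code build a properly typed array
// even though T itself is erased at runtime.
def singletonArray[T: ClassTag](x: T): Array[T] = Array(x)

singletonArray(1)   // Array[Int]
singletonArray("a") // Array[String]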
In these cases you may want to consider using the Typeable type class you get with shapeless for type safe casts. E.g.:
scala> import shapeless.syntax.typeable._
import shapeless.syntax.typeable._
scala> val b: Any = List(1, 2, 3)
b: Any = List(1, 2, 3)
scala> b.cast[List[String]]
res1: Option[List[String]] = None
scala> b.cast[List[Int]]
res2: Option[List[Int]] = Some(List(1, 2, 3))
As you can see, the cast[T] method, added to every type by shapeless through implicits, returns an Option[T] whose value is None if the cast fails and Some if it succeeds.
If you feel like it you could look at the source code of Typeable. However, I suggest you get a nice cup of coffee before doing it. :)
The following Scala code works correctly:
val str1 = "hallo"
val str2 = "huhu"
val zipped: IndexedSeq[(Char, Char)] = str1.zip(str2)
However if I import the implicit method
implicit def stringToNode(str: String): xml.Node = new xml.Text(str)
then the Scala (2.10) compiler shows an error: value zip is not a member of String
It seems that the presence of stringToNode somehow blocks the implicit conversion of str1 and str2 to WrappedString. Why? And is there a way to modify stringToNode such that zip works but stringToNode is still used when I call a function that requires a Node argument with a String?
You have ambiguous implicits here. Both StringOps and xml.Node provide a zip method, so the implicit conversion to apply is ambiguous and cannot be resolved. I don't know why the compiler doesn't give a better error message.
Here are some links to back it up:
http://www.scala-lang.org/api/current/index.html#scala.collection.immutable.StringOps
and
http://www.scala-lang.org/api/current/index.html#scala.xml.Node
edit: it was StringOps, not WrappedString, changed the links :) Have a look at Predef: http://www.scala-lang.org/api/current/index.html#scala.Predef$
to see predefined implicits in Scala.
I would avoid using implicits in this case. You want two different implicit conversions that both provide a method with the same name (zip), and I don't think that's possible. Also, if you import xml.Text, you can convert with just Text(str), which should be concise enough for anyone. If you must have the implicit conversion to xml.Node, pack the implicit def into an object and import it only in the places where you need it; that keeps your code readable and avoids conflicts in the places where you also need to zip strings. But basically, I would very much avoid using implicits just for convenient conversions.
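A sketch of that pack-it-into-an-object approach (the object name is made up):

object XmlImplicits {
  // Kept out of the default scope on purpose: import it only where the
  // String-to-Node conversion is actually wanted.
  implicit def stringToNode(str: String): xml.Node = new xml.Text(str)
}

def needsNode(n: xml.Node): String = n.toString

// zip works here because the conversion is not in scope:
val zipped = "hallo".zip("huhu")

def render(): String = {
  import XmlImplicits._ // local import, only for this block
  needsNode("hello")    // the String is converted to xml.Text here
}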
Like Felix wrote, it is generally a bad idea to define implicit conversions between similar data types, like the one you used. Doing that weakens the type system, leads to ambiguities like the one you encountered, and may produce extremely unclear ("magic") code that is very hard to analyze and debug.
Implicit conversions in Scala are mostly used to define lightweight, short-lived wrappers that enrich the API of the wrapped type. The implicit conversion from String to WrappedString falls into that category.
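For contrast, a made-up example of that enrichment style: a short-lived wrapper that adds a method to String instead of pretending a String is another type:

implicit class RichString(s: String) {
  def shout: String = s.toUpperCase + "!"
}

"hallo".shout // "HALLO!"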
Twitter's Effective Scala has a section about this issue.