I have the following definition of an enum:
object GraphType extends Enumeration {
  type Type = Value
  val MESSAGE, REQUEST, ERRORS = Value
}
Now I am trying to map each of the types to a corresponding new TimeSeries as follows:
val dataSets = ( GraphType.values map (graphType => graphType -> new TimeSeries(graphType)) ).toMap
The type system lists dataSets as Map[GraphType.Value, TimeSeries], which is precisely what I want. However, compilation fails with this error message:
error: diverging implicit expansion for type scala.collection.generic.CanBuildFrom[ird.replay.gui.GraphType.ValueSet,(ird.replay.gui.GraphType.Value, org.jfree.data.time.TimeSeries),That]
starting with method newCanBuildFrom in object SortedSet
val dataSets = GraphType.values map (graphType => graphType -> new TimeSeries(graphType)) toMap
Could anyone provide some explanation for this rather cryptic error message? Thanks.
Try converting the Set of enum values to a List first, like so:
val dataSets = GraphType.values.toList.map(gt => (gt, new TimeSeries(gt))).toMap
GraphType.values is a sorted set, so mapping it tries to build another sorted set, which requires an Ordering for the resulting pairs; mapping a List has no such requirement, so it works just fine.
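An equivalent workaround is to map over an iterator, which has no sorted builder and therefore no Ordering requirement. This is a self-contained sketch: the TimeSeries here is a stand-in for org.jfree.data.time.TimeSeries, defined only so the example compiles on its own.

```scala
object GraphTypeDemo {
  object GraphType extends Enumeration {
    type Type = Value
    val MESSAGE, REQUEST, ERRORS = Value
  }

  // Stand-in for org.jfree.data.time.TimeSeries, just for illustration.
  class TimeSeries(val key: Any)

  // Mapping over an iterator sidesteps the sorted ValueSet's builder,
  // so no Ordering is needed for the (Value, TimeSeries) pairs.
  val dataSets: Map[GraphType.Value, TimeSeries] =
    GraphType.values.iterator.map(gt => gt -> new TimeSeries(gt)).toMap
}
```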
Related
I'm trying to create a Spark UDF to extract a Map of (key, value) pairs from a user-defined case class.
The Scala function seems to work fine, but when I try to convert it to a UDF in Spark 2.0, I run into the "Schema for type Any is not supported" error.
case class myType(c1: String, c2: Int)
def getCaseClassParams(cc: Product): Map[String, Any] = {
  cc.getClass
    .getDeclaredFields             // all field names
    .map(_.getName)
    .zip(cc.productIterator.toSeq) // zipped with all values
    .toMap
}
But when I try to instantiate a function value as a UDF, it results in the following error:
val ccUDF = udf{(cc: Product, i: String) => getCaseClassParams(cc).get(i)}
java.lang.UnsupportedOperationException: Schema for type Any is not supported
at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:716)
at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:668)
at org.apache.spark.sql.catalyst.ScalaReflection$.schemaFor(ScalaReflection.scala:654)
at org.apache.spark.sql.functions$.udf(functions.scala:2841)
The error message says it all: you have an Any in the map. The Spark SQL and Dataset APIs do not support Any in a schema. Every type must be a supported one: a basic type such as String or Integer, a sequence of supported types, or a map of supported types.
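One way around it, keeping the asker's reflection approach, is to stringify the values so the function returns Map[String, String], a type Spark can encode. This is a sketch; the name caseClassParamsAsStrings is made up here, and it assumes getDeclaredFields returns the case-class fields in declaration order (true on common JVMs, but not guaranteed by the spec).

```scala
object UdfSketch {
  case class MyType(c1: String, c2: Int)

  // Same reflection-based extraction, but values are rendered as Strings,
  // giving Map[String, String] instead of the unsupported Map[String, Any].
  def caseClassParamsAsStrings(cc: Product): Map[String, String] =
    cc.getClass
      .getDeclaredFields
      .map(_.getName)
      .zip(cc.productIterator.map(_.toString).toSeq)
      .toMap
}
```

This function can then be lifted into a UDF, because its result type maps to a supported Spark SQL schema (a map of strings).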
I want to map a stream of Doubles to a method which takes two parameters, one of them has a default value. I want to use the default parameter so my method has only 1 parameter which I need to pass:
def pow(x: Double, exponent: Double = 2.0) = {
  math.pow(x, exponent)
}
I've found out that the following works, but I do not understand why:
val res = (1 to 100).map(_.toDouble).map(pow(_))
I'm especially confused because the following does not work (compiler error because of missing type information):
val pow2 = pow(_)
val res = pow2(2.0)
println(res) // expect 4.0
The compiler is not able to infer the type that you will provide to pow2. In the res mapping you explicitly feed it a collection of Doubles, and therefore pow(_) does not complain. However, in the case of val pow2 = pow(_) it complains that the parameter type is missing. Change it to
val pow2 = pow(_: Double)
val res = pow2(2.0)
println(res)
and it will work just fine. pow(_) expands to x => pow(x), and at that point the compiler cannot infer the type of x without the annotation.
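A minimal, self-contained sketch of the fixes: annotating the placeholder, or giving the value an explicit function type so the placeholder's type can be inferred from it.

```scala
object PowDemo {
  def pow(x: Double, exponent: Double = 2.0): Double =
    math.pow(x, exponent)

  // Fix 1: annotate the placeholder so the compiler knows x is a Double.
  val pow2: Double => Double = pow(_: Double)

  // Fix 2: an explicit function type on the left supplies the expected
  // type, so the bare placeholder infers without an annotation.
  val pow2b: Double => Double = pow(_)

  // The original mapping works because the collection fixes the type.
  val res = (1 to 100).map(_.toDouble).map(pow(_))
}
```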
I'm trying to figure out how to work with maps that use enums as keys in Scala. Looking at this question, I can instantiate such maps, but when I try to update the map in place, I get a type mismatch error. What is going on here?
object MyEnums extends Enumeration {
  type MyEnum = Value
  val BOB, TED, JEN = Value
}
var mymap = scala.collection.mutable.Map[MyEnums.Value, Long]()
mymap += (MyEnums.JEN -> 100L)
throws:
<console>:38: error: type mismatch;
found : (MyEnums.Value, Long)
required: (MyEnums.Value, Long)
mymap += (MyEnums.JEN -> 100L)
If I do the same thing but use e.g. strings as the key type, it works as expected.
EDIT: These issues occur when using Scala in the spark-shell, not the normal Scala REPL.
What is the best way to resolve the compilation error in the example below? Assume that 'm' must be of type GenMap and I do not have control over the arguments of myFun.
import scala.collection.GenMap
object Test {
  def myFun(m: Map[Int, String]) = m

  val m: GenMap[Int, String] = Map(1 -> "One", 2 -> "two")

  // Build error here on m.seq
  // Found:    scala.collection.Map[Int, String]
  // Required: scala.collection.immutable.Map[Int, String]
  val result = myFun(m.seq)
}
EDIT:
I should have been clearer. In my actual use case I don't have control over myFun, so I have to pass it a Map. The m also arises from another Scala component as a GenMap. I need to convert one to the other, but there appears to be a conflict between collection.Map and collection.immutable.Map.
m.seq.toMap will solve your problem.
According to the signature presented in the API, toMap returns a scala.collection.immutable.Map, which is exactly what your error message says is required. The scala.collection.Map returned by the seq method is a more general trait: besides being a parent of the immutable map, it is also a parent of the mutable and concurrent maps.
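A small runnable sketch of the conversion. It uses plain scala.collection.Map rather than GenMap (GenMap was removed in Scala 2.13; on 2.12 the same .toMap call applies to m.seq):

```scala
object GenMapDemo {
  // Stand-in for the function whose signature cannot be changed:
  // it requires an immutable Map.
  def myFun(m: Map[Int, String]) = m

  // A general collection.Map, as might arrive from another component.
  val m: scala.collection.Map[Int, String] =
    scala.collection.mutable.Map(1 -> "One", 2 -> "two")

  // toMap rebuilds it as a scala.collection.immutable.Map,
  // satisfying myFun's parameter type.
  val result = myFun(m.toMap)
}
```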
I'm trying to port my application to Scala 2.10.0-M2. I'm seeing some nice improvements with better warnings from the compiler, but I also got a bunch of errors, all related to mapping over Enumeration.values.
I'll give you a simple example. I'd like to have an enumeration, pre-create a bunch of objects, and build a map that uses the enumeration values as keys and the matching objects as values. For example:
object Phrase extends Enumeration {
  type Phrase = Value
  val PHRASE1 = Value("My phrase 1")
  val PHRASE2 = Value("My phrase 2")
}

class Entity(text: String)

object Test {
  val myMapWithPhrases = Phrase.values.map(p => (p -> new Entity(p.toString))).toMap
}
Now this used to work just fine on Scala 2.8 and 2.9, but 2.10.0-M2 gives me the following error:
[ERROR] common/Test.scala:21: error: diverging implicit expansion for type scala.collection.generic.CanBuildFrom[common.Phrase.ValueSet,(common.Phrase.Value, common.Entity),That]
[INFO] starting with method newCanBuildFrom in object SortedSet
[INFO] val myMapWithPhrases = Phrase.values.map(p => (p -> new Entity(p.toString))).toMap
^
What's causing this and how do you fix it?
It's basically a type mismatch error. You can work around it by first converting it to a list:
scala> Phrase.values.toList.map(p => (p, new Entity(p.toString))).toMap
res15: scala.collection.immutable.Map[Phrase.Value,Entity] = Map(My phrase 1 -> Entity@d0e999, My phrase 2 -> Entity@1987acd)
For more information, see the answers to What's a “diverging implicit expansion” scalac message mean? and What is a diverging implicit expansion error?
As you can see from your error, the ValueSet that holds the enums became a SortedSet at some point. Mapping it wants to produce another SortedSet, but it can't sort on your Entity.
Something like this works with case class Entity:
implicit object orderingOfEntity extends Ordering[Entity] {
  def compare(e1: Entity, e2: Entity) = e1.text compare e2.text
}