How is the <> method resolved on a tuple by Slick - scala

I came across Slick's documentation and found that it mandates a def * method in the definition of a table to get a mapped projection.
So the line looks like this:
def * = (name, id.?).<>(User.tupled, User.unapply)
Slick example here
I see the <> method is invoked on a tuple - in this case a Tuple2. The method is defined on the case class ShapedValue in Slick's code. How do I find out the implicit method that is doing the lookup?
Here are my imports:
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import slick.driver.H2Driver.api._
import slick.lifted.ShapedValue
import slick.lifted.ProvenShape

So I figured that one out for myself.
The object Shape mixes in three traits, namely ConstColumnShapeImplicits, AbstractTableShapeImplicits and TupleShapeImplicits. These three traits handle the implicit conversions concerning Shapes in Slick.
TupleShapeImplicits houses all the implicit conversion methods required to convert a tuple to a TupleShape.
Now in the line (name, id.?, salary.?).<>(User.tupled, User.unapply), what is happening is that resolving the call to <> requires an implicit Shape.
The Shape companion object is therefore part of the implicit scope for that lookup, and the TupleShapeImplicits conversions it mixes in come into scope as well.
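To see the mechanism in isolation, here is a minimal, self-contained sketch of the same pattern (the Demo names are illustrative stand-ins, not Slick's actual definitions): the compiler searches the companion object of an implicit parameter's type, so any implicits mixed into that companion are found without an import.
import scala.language.implicitConversions
object CompanionScopeDemo {
  // Stand-in for Slick's Shape: its companion mixes in the tuple implicits,
  // so they are in implicit scope wherever a ShapeDemo is requested.
  class ShapeDemo[T]
  trait TupleShapeImplicitsDemo {
    implicit def tuple2Shape[A, B]: ShapeDemo[(A, B)] = new ShapeDemo[(A, B)]
  }
  object ShapeDemo extends TupleShapeImplicitsDemo
  // Stand-in for ShapedValue, which carries the <> method.
  class ShapedValueDemo[T](val value: T) {
    def <>[R](f: T => R, g: R => Option[T]): R = f(value)
  }
  // Stand-in for the conversion that lifts any value into a ShapedValue,
  // provided an implicit ShapeDemo for it can be found.
  implicit def anyToShapedValue[T](value: T)(implicit shape: ShapeDemo[T]): ShapedValueDemo[T] =
    new ShapedValueDemo(value)
  case class UserDemo(name: String, id: Option[Int])
  // Compiles because ShapeDemo's companion supplies tuple2Shape:
  val mapped: UserDemo = ("Alice", Option(1)).<>(UserDemo.tupled, UserDemo.unapply)
}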

Providing implicit evidence for context bounds on Object

I'm trying to write some abstractions in some Spark Scala code, but I'm running into issues when using objects. As an example I'm using Spark's Encoder, which converts case classes to database schemas, but I think this question applies to any context bound.
Here is a minimal code example of what I'm trying to do:
package com.sample.myexample
import org.apache.spark.sql.Encoder
import scala.reflect.runtime.universe.TypeTag
case class MySparkSchema(id: String, value: Double)
abstract class MyTrait[T: TypeTag: Encoder]
object MyObject extends MyTrait[MySparkSchema]
Which fails with the following compilation error:
Unable to find encoder for type com.sample.myexample.MySparkSchema. An implicit Encoder[com.sample.myexample.MySparkSchema] is needed to store com.sample.myexample.MySparkSchema instances in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
I tried defining the implicit evidence in the object like so (the import statement was suggested by IntelliJ, but it looks a bit weird):
import com.sample.myexample.MyObject.encoder
object MyObject extends MyTrait[MySparkSchema] {
  implicit val encoder: Encoder[MySparkSchema] = Encoders.product[MySparkSchema]
}
Which fails with the error message:
MyTrait.scala:13:25: super constructor cannot be passed a self reference unless parameter is declared by-name
One other thing I tried is to convert the object to a class and provide implicit evidence to the constructor:
class MyObject(implicit evidence: Encoder[MySparkSchema]) extends MyTrait[MySparkSchema]
This compiles and works fine, but at the expense of MyObject now being a class instead.
Question: Is it possible to provide implicit evidence for the context bounds when extending a trait? Or does the implicit evidence force me to make a constructor and use class instead?
Your first error almost gives you the solution: you have to import spark.implicits._ for Product types.
You could do this:
val spark: SparkSession = SparkSession.builder().getOrCreate()
import spark.implicits._
Full Example (note that a val cannot sit at the top level of a Scala 2 source file, so the session is wrapped in a holder object here; SparkContextHolder is just an illustrative name):
package com.sample.myexample
import org.apache.spark.sql.{Encoder, SparkSession}
import scala.reflect.runtime.universe.TypeTag
case class MySparkSchema(id: String, value: Double)
abstract class MyTrait[T: TypeTag: Encoder]
object SparkContextHolder {
  val spark: SparkSession = SparkSession.builder().getOrCreate()
}
import SparkContextHolder.spark.implicits._
object MyObject extends MyTrait[MySparkSchema]

How can I define an instance of a typeclass in Scala that can be used for all subclasses of a particular type?

I am trying to define an instance of Show (from cats 0.9) that can be used for all members of an ADT as follows:
import $ivy.`org.typelevel::cats:0.9.0`, cats.Show
sealed abstract class Colour(val name: String)
implicit val ColourShow = new Show[Colour] {
  def show(c: Colour) = c.name
}
object Colour {
  object Red extends Colour("Red")
  object Blue extends Colour("Blue")
}
import Show._
println(Colour.Red.show)
An applicable instance cannot be found for Red, however:
Compiling /Users/Rich/Projects/worksheets/fp-patterns/Colours.sc
/Users/Rich/Projects/worksheets/fp-patterns/Colours.sc:16: value show is not a member of object ammonite.$file.Colours.Colour.Red
val res_5 = println(Colour.Red.show)
^
Compilation Failed
Is it possible to use typeclasses in this way? I am trying to avoid having to define a separate instance for each concrete instance of Colour.
I think you're misreading what's happening here. The implicit you've defined does actually work for the instances.
e.g.
ColourShow.show(Colour.Red)
If you want to be able to call show on an instance of a Colour without any arguments, you'll need a trait with a no-argument show method plus an implicit conversion from Colour to that trait, as sketched below.
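A minimal hand-rolled sketch of that enrichment (illustrative names; this is essentially what the cats syntax import provides, so don't combine the two):
import scala.language.implicitConversions
final class ShowOps[A](a: A, ev: Show[A]) {
  def show: String = ev.show(a)
}
// Resolving the Show instance while the conversion is being applied lets
// the compiler widen Red.type to Colour, so ColourShow is found.
implicit def toShowOps[A](a: A)(implicit ev: Show[A]): ShowOps[A] =
  new ShowOps(a, ev)
println(Colour.Red.show) // prints "Red"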
In addition to what others have pointed out, you'll need to import cats.implicits._
see a working example in: https://scastie.scala-lang.org/d1egoaz/LVaJEccDSeas9VmzHqf1ug/1
You can also use the shorter version to create a Show instance for Colour:
implicit val colourShow: Show[Colour] = Show.show[Colour](_.name)

How to implement a trait with a generic case class that creates a dataset in Scala

I want to create a Scala trait that should be implemented with a case class T. The trait simply loads data and transforms it into a Spark Dataset of type T. I get an error saying that no encoder can be found, which I think is because Scala does not know that T should be a case class. How can I tell the compiler that? I've seen somewhere that I should mention Product, but there is no such class defined. Feel free to suggest other ways to do this!
I have the following code, but it does not compile; it fails with: 42: error: Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing sqlContext.implicits._
[INFO] .as[T]
I'm using Spark 1.6.1
Code:
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Dataset, SQLContext}
/**
 * A trait that moves data on Hadoop with Spark based on the location and the granularity of the data.
 */
trait Agent[T] {
  // Assumed abstract member: `location` is referenced by load() below.
  def location: String
  /**
   * Load a DataFrame from the location and convert it into a Dataset
   * @return Dataset[T]
   */
  protected def load(): Dataset[T] = {
    // Read in the data
    SparkContextKeeper.sqlContext.read
      .format("com.databricks.spark.csv")
      .load("/myfolder/" + location + "/2016/10/01/")
      .as[T]
  }
}
Your code is missing three things:
Indeed, you must let the compiler know that T is a subclass of Product (the supertype of all Scala case classes and tuples)
The compiler also requires the TypeTag and ClassTag of the actual case class; Spark uses these implicitly to overcome type erasure
An import of sqlContext.implicits._
Unfortunately, you can't use context bounds on a trait's type parameters (they desugar to implicit constructor parameters, and traits can't take constructor parameters in Scala 2), so the simplest workaround would be to use an abstract class instead:
import scala.reflect.runtime.universe.TypeTag
import scala.reflect.ClassTag
abstract class Agent[T <: Product : ClassTag : TypeTag] {
  protected def load(): Dataset[T] = {
    val sqlContext: SQLContext = SparkContextKeeper.sqlContext
    import sqlContext.implicits._
    sqlContext.read. // same...
  }
}
Obviously, this isn't equivalent to using a trait, and might suggest that this design isn't the best fit for the job. Another alternative is placing load in an object and moving the type parameter to the method:
object Agent {
  protected def load[T <: Product : ClassTag : TypeTag](): Dataset[T] = {
    // same...
  }
}
Which one is preferable is mostly up to where and how you're going to call load and what you're planning to do with the result.
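For illustration, a hypothetical concrete agent built on the abstract-class version (Person is an assumed case class matching the CSV columns, and location is the member from the question's trait):
case class Person(name: String, age: String)
class PersonAgent extends Agent[Person] {
  def location: String = "people" // assumed data location
  def people(): Dataset[Person] = load() // load() now yields Dataset[Person]
}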
You need to take two actions:
Add import sparkSession.implicits._ to your imports
Make your trait trait Agent[T <: Product]

ReactiveMongo findOne gives ambiguous implicit values

My relevant imports are:
import play.api.libs.concurrent.Execution.Implicits._
import play.api.libs.json.Json
import play.modules.reactivemongo.json._
import play.modules.reactivemongo.ReactiveMongoApi
import play.modules.reactivemongo.json.collection.JSONCollection
import reactivemongo.api.commands.WriteResult
import reactivemongo.extensions.json.dao.JsonDao
import reactivemongo.extensions.json.dsl.JsonDsl._
The code which causes the problem is:
myCollection.find(Json.obj("email" -> email)).one
This gives: ambiguous implicit values: both object BSONDoubleFormat in trait BSONFormats of type play.modules.reactivemongo.json.BSONDoubleFormat.type and object BSONStringFormat in trait BSONFormats of type play.modules.reactivemongo.json.BSONStringFormat.type match expected type play.api.libs.json.Reads[T]
As I understand it, I need to somehow specify which format object should be used, but I don't understand how this can be done. The other problem is that I'm using JSON objects, not BSONs, to store data in Mongo, so I don't understand why it is complaining about the BSONDoubleFormat and BSONStringFormat objects.
If you look at the documentation and examples, you can see that the function is .one[T], not .one.
As you don't indicate the result type T, it cannot compile.
myCollection.find(Json.obj("email" -> email)).one[T]
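For example, assuming a User case class with a Play JSON Reads instance (a hypothetical model, not from the question), and reusing myCollection and email from above:
import scala.concurrent.Future
import play.api.libs.json.{Json, Reads}
case class User(email: String, name: String)
implicit val userReads: Reads[User] = Json.reads[User]
// The type parameter tells the driver which Reads[T] to deserialise with:
val result: Future[Option[User]] =
  myCollection.find(Json.obj("email" -> email)).one[User]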

How does Scala use explicit types when resolving implicits?

I have the following code which uses spray-json to deserialise some JSON into a case class, via the parseJson method.
Depending on where the implicit JsonFormat[MyCaseClass] is defined (in-line or imported from companion object), and whether there is an explicit type provided when it is defined, the code may not compile.
I don't understand why the implicit needs an explicit type when it is defined in (and imported from) the companion object, but not when it is defined inline.
Interestingly, IntelliJ correctly locates the implicit parameters (via cmd-shift-p) in all cases.
I'm using Scala 2.11.7.
Broken Code - Wildcard import from companion object, inferred type:
import SampleApp._
import spray.json._
class SampleApp {
  import MyJsonProtocol._
  val inputJson = """{"children":["a", "b", "c"]}"""
  println(s"Deserialise: ${inputJson.parseJson.convertTo[MyCaseClass]}")
}
object SampleApp {
  case class MyCaseClass(children: List[String])
  object MyJsonProtocol extends DefaultJsonProtocol {
    implicit val myCaseClassSchemaFormat = jsonFormat1(MyCaseClass)
  }
}
Results in:
Cannot find JsonReader or JsonFormat type class for SampleAppObject.MyCaseClass
Note that the same thing happens with an explicit import of the myCaseClassSchemaFormat implicit.
Working Code #1 - Wildcard import from companion object, explicit type:
Adding an explicit type to the JsonFormat in the companion object causes the code to compile:
import SampleApp._
import spray.json._
class SampleApp {
  import MyJsonProtocol._
  val inputJson = """{"children":["a", "b", "c"]}"""
  println(s"Deserialise: ${inputJson.parseJson.convertTo[MyCaseClass]}")
}
object SampleApp {
  case class MyCaseClass(children: List[String])
  object MyJsonProtocol extends DefaultJsonProtocol {
    // Explicit type added here now
    implicit val myCaseClassSchemaFormat: JsonFormat[MyCaseClass] = jsonFormat1(MyCaseClass)
  }
}
Working Code #2 - Implicits inline, inferred type:
However, putting the implicit parameters in-line where they are used, without the explicit type, also works!
import SampleApp._
import spray.json._
class SampleApp {
  import DefaultJsonProtocol._
  // Now the custom JsonFormat is defined in-line rather than imported
  implicit val myCaseClassSchemaFormat = jsonFormat1(MyCaseClass)
  val inputJson = """{"children":["a", "b", "c"]}"""
  println(s"Deserialise: ${inputJson.parseJson.convertTo[MyCaseClass]}")
}
object SampleApp {
  case class MyCaseClass(children: List[String])
}
After searching for the error message Huw mentioned in his comment, I was able to find this StackOverflow question from 2010: Why does this explicit call of a Scala method allow it to be implicitly resolved?
This led me to this Scala issue created in 2008, and closed in 2011: https://issues.scala-lang.org/browse/SI-801 ('require explicit result type for implicit conversions?')
Martin stated:
I have implemented a slightly more permissive rule: An implicit conversion without explicit result type is visible only in the text following its own definition. That way, we avoid the cyclic reference errors. I close for now, to see how this works. If we still have issues we might come back to this.
This holds - if I re-order the breaking code so that the companion object is declared first, then the code compiles. (It's still a little weird!)
(I suspect I don't see the 'implicit method is not applicable here' message because I have an implicit value rather than a conversion - though I'm assuming here that the root cause is the same as the above).
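To see that rule in isolation, here is a minimal demo independent of spray-json (the names are illustrative):
object OrderingDemo {
  def needsInt(implicit i: Int): Int = i
  // Does not compile if uncommented: `answer` has an inferred result type,
  // so it is only eligible for implicit search after this point in the source.
  // val tooEarly = needsInt
  implicit val answer = 42
  val fine = needsInt // compiles: the use follows the definition
}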