Convert implicit def to Scala 3 given syntax - scala

For the following random UUID generator using Cats Effect:
import java.util.UUID
import cats.effect.Sync
import cats.ApplicativeThrow

trait UuidGen[F[_]]:
  def make: F[UUID]
  def read(string: String): F[UUID]

object UuidGen:
  def apply[F[_]: UuidGen]: UuidGen[F] = implicitly

  implicit def forSync[F[_]: Sync]: UuidGen[F] =
    new UuidGen[F]:
      def make: F[UUID] =
        Sync[F].delay(UUID.randomUUID)
      def read(string: String): F[UUID] =
        ApplicativeThrow[F].catchNonFatal(UUID.fromString(string))
What is the equivalent of implicit def in Scala 3 given syntax?

import java.util.UUID
import cats.effect.Sync
import cats.ApplicativeThrow

trait UuidGen[F[_]]:
  def make: F[UUID]
  def read(string: String): F[UUID]

object UuidGen:
  def apply[F[_]: UuidGen]: UuidGen[F] = summon

  given [F[_]: Sync]: UuidGen[F] with
    def make: F[UUID] =
      Sync[F].delay(UUID.randomUUID)
    def read(string: String): F[UUID] =
      ApplicativeThrow[F].catchNonFatal(UUID.fromString(string))
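For completeness, the given can also be named (e.g. given uuidGenForSync[F[_]: Sync]: UuidGen[F] with ...), which helps if you ever need to refer to it explicitly. A minimal usage sketch, assuming cats-effect 3's IO (which has a Sync instance); the Demo object below is illustrative, not part of the original question:

import cats.effect.{IO, IOApp}

object Demo extends IOApp.Simple:
  def run: IO[Unit] =
    for
      id <- UuidGen[IO].make   // resolves the given, exactly like the old implicit def
      _  <- IO.println(id)     // prints a freshly generated UUID
    yield ()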

Related

Scala Generic Repository Class for Reactive Mongo Repository (Alpakka) - Needed Class, Found T

I'm trying to create a generic class in Scala so I can create a repository for different collections without repeating myself.
The problem is that if I do it as a generic class (as in this example) I get an error on this line:
val codecRegistry = fromRegistries(fromProviders(classOf[T]), DEFAULT_CODEC_REGISTRY)
Expected Class but Found [T]
But if I replace T with any concrete class (let's say User) throughout the code, it works.
This is my class:
package persistence.repository.impl

import akka.stream.Materializer
import akka.stream.alpakka.mongodb.scaladsl.{MongoSink, MongoSource}
import akka.stream.scaladsl.{Sink, Source}
import akka.{Done, NotUsed}
import com.mongodb.reactivestreams.client.MongoClients
import constants.MongoConstants._
import org.bson.codecs.configuration.CodecRegistries.{fromProviders, fromRegistries}
import org.mongodb.scala.MongoClient.DEFAULT_CODEC_REGISTRY
import org.mongodb.scala.bson.codecs.Macros._
import org.mongodb.scala.model.Filters
import persistence.entity.ProductItem
import persistence.repository.Repository

import scala.concurrent.{ExecutionContext, Future}

class UserMongoDatabase[T](implicit materializer: Materializer,
                           executionContext: ExecutionContext)
    extends Repository[T] {

  val codecRegistry = fromRegistries(fromProviders(classOf[T]), DEFAULT_CODEC_REGISTRY)

  val client = MongoClients.create(HOST)
  val db = client.getDatabase(DATABASE)

  val requestedCollection = db
    .getCollection(USER_COLLECTION, classOf[T])
    .withCodecRegistry(codecRegistry)

  val source: Source[T, NotUsed] =
    MongoSource(requestedCollection.find(classOf[T]))

  val rows: Future[Seq[T]] = source.runWith(Sink.seq)

  override def getAll: Future[Seq[T]] = rows

  override def getById(id: AnyVal): Future[Option[T]] = rows.map { list =>
    list.filter { user =>
      user.asInstanceOf[{ def _id: AnyVal }]._id == id
    }.headOption
  }

  override def getByEmail(email: String): Future[Option[T]] = rows.map { list =>
    list.filter { user =>
      user.asInstanceOf[{ def email: AnyVal }].email == email
    }.headOption
  }

  override def save(obj: T): Future[T] = {
    val source = Source.single(obj)
    source.runWith(MongoSink.insertOne(requestedCollection)).map(_ => obj)
  }

  override def delete(id: AnyVal): Future[Done] = {
    val source = Source.single(id).map(i => Filters.eq("_id", id))
    source.runWith(MongoSink.deleteOne(requestedCollection))
  }
}
This is my repository trait:
package persistence.repository

import akka.Done

import scala.concurrent.Future

trait Repository[T] {
  def getAll: Future[Seq[T]]
  def getById(id: AnyVal): Future[Option[T]]
  def save(user: T): Future[T]
  def delete(id: AnyVal): Future[Done]
  def getByEmail(email: String): Future[Option[T]]
}
As said in the comments, this is the perfect example of a use case for ClassTag in Scala. It allows retaining the actual runtime class of a generic/parameterized type.
class DefaultMongoDatabase[T](implicit ..., ct: ClassTag[T])
    extends Repository[T] {

  val codecRegistry = fromRegistries(fromProviders(ct.runtimeClass), ...)
(You can move the ClassTag logic into the trait if you want.)
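As a standalone illustration of what the ClassTag buys you (the runtimeClassOf helper below is hypothetical, just for demonstration):

import scala.reflect.ClassTag

// classOf[T] does not compile for an abstract type parameter T,
// but an implicit ClassTag[T] carries the erased runtime class along.
def runtimeClassOf[T](implicit ct: ClassTag[T]): Class[_] = ct.runtimeClass

runtimeClassOf[String]    // class java.lang.String
runtimeClassOf[List[Int]] // class scala.collection.immutable.List (type argument erased)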

What can cause "Unexpected tree in genLoad" in a macro expansion?

I was trying to produce a small macro to isolate another problem that I was having and started running into this compile-time error.
Error:scalac:
Unexpected tree in genLoad: test.MacroTest$Baz.type/class scala.reflect.internal.Trees$TypeTree at: source-/Users/jpatterson/test/src/test/scala/test/MacroTest.scala,line-5,offset=114
while compiling: /Users/jpatterson/test/src/test/scala/test/MacroTest.scala
during phase: jvm
library version: version 2.13.0-RC1
compiler version: version 2.13.0-RC1
reconstructed args: -deprecation -Vimplicits -language:higherKinds -language:implicitConversions -language:postfixOps -classpath ... (cut)
last tree to typer: Literal(Constant(test.MacroTest.MacroTest$Baz.type))
tree position: line 4 of /Users/jpatterson/test/src/test/scala/test/MacroTest.scala
tree tpe: Class(classOf[test.MacroTest$Baz$])
symbol: null
call site: constructor MacroTest$$anon$1 in package test
== Source file context for tree position ==
1 package test
2
3 object MacroTest {
4 case class Baz(x: Int, y: Int)
5 implicit def bazRead: Read[Baz] = Read.readFor[Baz]
6
7 def main(args: Array[String]): Unit = {
I started with Scala 2.12.8 and tried switching to 2.13.0-RC1 just to see whether it was something that had already been fixed. It fails the same way with both versions of Scala.
The macro code:
package test

import scala.language.experimental.macros
import scala.reflect.macros.whitebox.Context

trait Read[A] {
  def read(in: String): A
}

object Read {
  implicit def intRead = new Read[Int] {
    override def read(in: String): Int = in.toInt
  }

  def CaseClassReadImpl[A: c.WeakTypeTag](c: Context): c.Expr[Read[A]] = {
    import c.universe._
    val aType = weakTypeOf[A]
    val params = aType.decls.collect {
      case m: MethodSymbol if m.isCaseAccessor => m
    }.toList
    val paramList = params.map(param => q"Read.read[${param.typeSignature}](in)")
    val src = q"""
      new Read[$aType] {
        def read(in: String) = ${aType.companion}.apply(..$paramList)
      }
    """
    println(src)
    c.Expr[Read[A]](src)
  }

  def readFor[A]: Read[A] = macro CaseClassReadImpl[A]

  def read[A](in: String)(implicit A: Read[A]): A = A.read(in)
}
The code that exercises it:
package test

object MacroTest {
  case class Baz(x: Int, y: Int)
  implicit def bazRead: Read[Baz] = Read.readFor[Baz]

  def main(args: Array[String]): Unit = {
    println(Read.read[Baz]("4"))
  }
}
Compiling the second block causes the error above.
I was expecting this to compile correctly. I put that println into the macro definition so that I could grab the generated code and try compiling it directly. When I paste that output into the second block, it compiles fine. I can even replace bazRead's body with it and everything works as expected: it prints out Baz(4,4).
Regarding your macro, you're trying to splice a type (aType.companion) into a position where a term is expected (a tpe: Type is transformed into TypeTree(tpe)).
Try replacing ${aType.companion} with ${aType.typeSymbol.companion}.
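Applied to the quasiquote from the question, only the splice changes (a sketch of the fix, not tested against your exact setup):

val src = q"""
  new Read[$aType] {
    // aType.typeSymbol.companion is a Symbol, which splices as a term
    // (a reference to the companion object), whereas aType.companion is a
    // Type and gets wrapped in a TypeTree, causing the genLoad crash.
    def read(in: String) = ${aType.typeSymbol.companion}.apply(..$paramList)
  }
"""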
For deriving type classes it's better to use Shapeless, Magnolia or Scalaz-deriving than raw macros.
For example, in Shapeless, Read can be derived as follows:
import shapeless.{Generic, HList, HNil, ::}

trait Read[A] {
  def read(in: String): A
}

object Read {
  implicit def intRead: Read[Int] = _.toInt

  implicit def hNilRead: Read[HNil] = _ => HNil

  implicit def hConsRead[H, T <: HList](implicit r: Read[H], r1: Read[T]): Read[H :: T] =
    in => r.read(in) :: r1.read(in)

  implicit def caseClassRead[A, L <: HList](implicit gen: Generic.Aux[A, L], r: Read[L]): Read[A] =
    in => gen.from(r.read(in))

  def read[A](in: String)(implicit A: Read[A]): A = A.read(in)
}

case class Baz(x: Int, y: Int)

Read.read[Baz]("123") // Baz(123,123)

Implicit Encoder for TypedDataset and Type Bounds in Scala

My objective is to create a MyDataFrame class that will know how to fetch data at a given path, but I want to provide type-safety. I'm having some trouble using a frameless.TypedDataset with type bounds on remote data. For example
sealed trait Schema
final case class TableA(id: String) extends Schema
final case class TableB(id: String) extends Schema

class MyDataFrame[T <: Schema](path: String, implicit val spark: SparkSession) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
But I keep getting could not find implicit value for evidence parameter of type frameless.TypedEncoder[org.apache.spark.sql.Row]. I know that TypedDataset.create needs an Injection for this to work, but I'm not sure how I would write this for a generic T. I thought that, since all subtypes of Schema are case classes, the compiler would be able to deduce that it would work.
Anybody ever run into this?
All implicit parameters should be in the last parameter list and this parameter list should be separate from non-implicit ones.
If you try to compile
class MyDataFrame[T <: Schema](path: String)(implicit spark: SparkSession) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
you'll see the error
Error:(11, 35) could not find implicit value for evidence parameter of type frameless.TypedEncoder[org.apache.spark.sql.Row]
def read = TypedDataset.create(spark.read.parquet(path)).as[T]
So let's just add the corresponding implicit parameter:
class MyDataFrame[T <: Schema](path: String)(implicit spark: SparkSession, te: TypedEncoder[Row]) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
and we'll get another error:
Error:(11, 64) could not find implicit value for parameter as: frameless.ops.As[org.apache.spark.sql.Row,T]
def read = TypedDataset.create(spark.read.parquet(path)).as[T]
So let's add one more implicit parameter
import frameless.ops.As
import frameless.{TypedDataset, TypedEncoder}
import org.apache.spark.sql.{Row, SparkSession}

class MyDataFrame[T <: Schema](path: String)(implicit spark: SparkSession, te: TypedEncoder[Row], as: As[Row, T]) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
or with kind-projector
class MyDataFrame[T <: Schema : As[Row, ?]](path: String)(implicit spark: SparkSession, te: TypedEncoder[Row]) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
You can create a custom type class:
trait Helper[T] {
  implicit def te: TypedEncoder[Row]
  implicit def as: As[Row, T]
}

object Helper {
  implicit def mkHelper[T](implicit te0: TypedEncoder[Row], as0: As[Row, T]): Helper[T] = new Helper[T] {
    override implicit def te: TypedEncoder[Row] = te0
    override implicit def as: As[Row, T] = as0
  }
}

class MyDataFrame[T <: Schema : Helper](path: String)(implicit spark: SparkSession) {
  val h = implicitly[Helper[T]]
  import h._
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
or
class MyDataFrame[T <: Schema](path: String)(implicit spark: SparkSession, h: Helper[T]) {
  import h._
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
or
trait Helper[T] {
  def create(dataFrame: DataFrame): TypedDataset[T]
}

object Helper {
  implicit def mkHelper[T](implicit te: TypedEncoder[Row], as: As[Row, T]): Helper[T] =
    (dataFrame: DataFrame) => TypedDataset.create(dataFrame).as[T]
}

class MyDataFrame[T <: Schema : Helper](path: String)(implicit spark: SparkSession) {
  def read = implicitly[Helper[T]].create(spark.read.parquet(path))
}
or
class MyDataFrame[T <: Schema](path: String)(implicit spark: SparkSession, h: Helper[T]) {
  def read = h.create(spark.read.parquet(path))
}
Corrected version:
import org.apache.spark.sql.Encoder
import frameless.{TypedDataset, TypedEncoder}

class MyDataFrame[T <: Schema](path: String)(implicit
    spark: SparkSession,
    e: Encoder[T],
    te: TypedEncoder[T]
) {
  def read: TypedDataset[T] = TypedDataset.create[T](spark.read.parquet(path).as[T])
}
or using context bounds
class MyDataFrame[T <: Schema : Encoder : TypedEncoder](path: String)(implicit
    spark: SparkSession
) {
  def read: TypedDataset[T] = TypedDataset.create[T](spark.read.parquet(path).as[T])
}
Testing:
I converted a JSON file {"id": "xyz"} into a Parquet file and then:
sealed trait Schema
final case class TableA(id: String) extends Schema
final case class TableB(id: String) extends Schema

import org.apache.spark.sql.SparkSession

implicit val spark: SparkSession = SparkSession.builder
  .master("local")
  .appName("Spark SQL basic example")
  .getOrCreate()

import spark.implicits._
import frameless.syntax._

val res: TypedDataset[TableA] = new MyDataFrame[TableA]("path/to/parquet/file").read
println(res) // [id: string]
res.foreach(println).run() // TableA(xyz)

spark-shell cannot find the class to be extended

Why can't I load the following code in spark-shell?
import org.apache.spark.sql.types._
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Encoders
import org.apache.spark.sql.expressions.Aggregator

case class Data(i: Int)

val customSummer = new Aggregator[Data, Int, Int] {
  def zero: Int = 0
  def reduce(b: Int, a: Data): Int = b + a.i
  def merge(b1: Int, b2: Int): Int = b1 + b2
  def finish(r: Int): Int = r
}.toColumn()
The error:
<console>:47: error: object creation impossible, since:
it has 2 unimplemented members.
/** As seen from <$anon: org.apache.spark.sql.expressions.Aggregator[Data,Int,Int]>, the missing signatures are as follows.
* For convenience, these are usable as stub implementations.
*/
def bufferEncoder: org.apache.spark.sql.Encoder[Int] = ???
def outputEncoder: org.apache.spark.sql.Encoder[Int] = ???
val customSummer = new Aggregator[Data, Int, Int] {
Update: #user8371915's solution works. But the following script fails to load, with a different error. I used :load script.sc in the spark-shell.
import org.apache.spark.sql.expressions.Aggregator
class MyClass extends Aggregator
Error:
loading ./script.sc...
import org.apache.spark.sql.expressions.Aggregator
<console>:11: error: not found: type Aggregator
class MyClass extends Aggregator
Update (2017-12-03): it doesn't seem to work within Zeppelin, either.
As per the error message, you didn't implement bufferEncoder and outputEncoder. Please check the API docs for the list of abstract methods that have to be implemented.
These two should suffice:
def bufferEncoder: Encoder[Int] = Encoders.scalaInt
def outputEncoder: Encoder[Int] = Encoders.scalaInt
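Putting it together, a sketch of the snippet from the question with the two encoders added (same names as in the question; paste it into spark-shell as one block):

import org.apache.spark.sql.{Encoder, Encoders}
import org.apache.spark.sql.expressions.Aggregator

case class Data(i: Int)

val customSummer = new Aggregator[Data, Int, Int] {
  def zero: Int = 0
  def reduce(b: Int, a: Data): Int = b + a.i
  def merge(b1: Int, b2: Int): Int = b1 + b2
  def finish(r: Int): Int = r
  // the two members the compiler reported as missing
  def bufferEncoder: Encoder[Int] = Encoders.scalaInt
  def outputEncoder: Encoder[Int] = Encoders.scalaInt
}.toColumn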

Partial application of Scala macros

The example illustrating the problem:
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object Test {
  def foo1[A, B]: Unit = macro impl[A, B]

  def foo2[A]: Unit = macro impl[A, Option[Int]]

  def impl[A: c.WeakTypeTag, B: c.WeakTypeTag](c: blackbox.Context): c.Expr[Unit] = {
    import c.universe._
    c.echo(c.enclosingPosition, s"A=${weakTypeOf[A]}, B=${weakTypeOf[B]}")
    reify(())
  }
}
/*
scala> Test.foo1[Int, Option[Int]]
<console>:12: A=Int, B=Option[Int]
Test.foo1[Int, Option[Int]]
^
scala> Test.foo2[Int]
<console>:12: A=Int, B=Option[A] // <--- Expected: A=Int, B=Option[Int]
Test.foo2[Int]
*/
Why did we lose the concrete type in foo2? It looks very similar to foo1.
PS: I've found a solution, which may not be the best:
import scala.language.experimental.macros
import scala.reflect.macros.blackbox
import scala.reflect.runtime.universe.TypeTag

object Test {
  def foo1[A, B](implicit bTag: TypeTag[B]): Unit = macro impl[A, B]

  def foo2[A](implicit bTag: TypeTag[Option[Int]]): Unit = macro impl[A, Option[Int]]

  def impl[A: c.WeakTypeTag, B](c: blackbox.Context)(bTag: c.Expr[TypeTag[B]]): c.Expr[Unit] = {
    import c.universe._
    c.echo(c.enclosingPosition, s"A=${weakTypeOf[A]}, B=${bTag.actualType.typeArgs.head}")
    reify(())
  }
}
/*
scala> Test.foo1[Int, Option[Int]]
<console>:12: A=Int, B=Option[Int]
Test.foo1[Int, Option[Int]]
^
scala> Test.foo2[Int]
<console>:12: A=Int, B=Option[Int]
Test.foo2[Int]
*/
But an answer to the original question would still interest me.
One workaround is a type-lambda-style encoding: pass the concrete type as a type member of a refinement type and read it back inside the macro:
def foo2[A]: Unit = macro impl[A, { type A = Option[Int] }]

def impl[A: c.WeakTypeTag, B: c.WeakTypeTag](c: blackbox.Context): c.Expr[Unit] = {
  import c.universe._
  // prints Option[Int]
  println(c.weakTypeOf[B].members.find(_.isType).get.typeSignature)
  reify(())
}