The output of the following code is just "list" (no numbers get printed).
trait TC[A]:
  inline def show(value: A): Unit

object TC:
  given TC[Int] with
    inline def show(value: Int): Unit = println(value)

  given [A: TC]: TC[List[A]] with
    inline def show(value: List[A]): Unit =
      println("list")
      value.foreach(x => summon[TC[A]].show(x))
@main def main: Unit =
  import TC._
  val value = 1 :: 2 :: 3 :: Nil
  summon[TC[List[Int]]].show(value)
If I remove inline then everything works fine. Am I invoking undefined behavior?
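For reference, a minimal sketch of the same program with inline removed, which (matching the behaviour described above) prints "list" followed by 1, 2 and 3:

trait TC[A]:
  def show(value: A): Unit

object TC:
  given TC[Int] with
    def show(value: Int): Unit = println(value)

  given [A: TC]: TC[List[A]] with
    def show(value: List[A]): Unit =
      println("list")
      value.foreach(x => summon[TC[A]].show(x))

@main def run(): Unit =
  summon[TC[List[Int]]].show(1 :: 2 :: 3 :: Nil)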
I have 4 methods with the same logic and 4 possible type mappings:
def convertStringToString(in: String): String = ???
def convertIntToString(in: Int): String = ???
def convertIntToInt(in: Int): Int = ???
def convertStringToInt(in: String): Int = ???
I want to generalize the input and output types and write the logic in one method. I tried to generalize the input parameter:
def convertToInt[IN](in: IN): Int = in match {
  case x: String if x.forall(_.isDigit) => x.toInt
  case y: Int => y
  case _ => 0
}
def convertToString[IN](in: IN): String = convertToInt[IN](in).toString
Could you help me generalize the second one:
def convertToInt[IN, OUT](in: IN): OUT = ???
If you really wanted to, you could have something typeclass-based:
def convert[I, O](in: I)(implicit c: ConversionRule[I, O]): O = {
  if (c.isConvertible(in)) c.convert(in)
  else c.zero
}
trait ConversionRule[I, O] {
  def isConvertible(in: I): Boolean
  def convert(in: I): O
  def zero: O // Could possibly derive the zero from, e.g., a cats Monoid instance where such exists
}
The eagle-eyed may notice that the isConvertible/convert methods match the contract of PartialFunction[I, O]'s isDefinedAt/apply, so we may as well just use PartialFunction (and rewrite convert in terms of isDefinedAt/apply):
trait ConversionRule[I, O] extends PartialFunction[I, O] {
  def zero: O
}
The zero fallback can be expressed in terms of PartialFunction.applyOrElse, but for the case where zero is constant (which is the case where referential transparency is preserved), this is much faster.
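For concreteness, here is a minimal sketch (my reading of the note above, not code from the original answer) of convert rewritten against the PartialFunction-based trait:

def convert[I, O](in: I)(implicit c: ConversionRule[I, O]): O =
  c.applyOrElse(in, (_: I) => c.zero) // one pattern-match dispatch instead of isDefinedAt followed by apply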
Smart constructors can be defined:
object ConversionRule {
  def apply[I, O](zeroValue: O)(pf: PartialFunction[I, O]): ConversionRule[I, O] =
    new ConversionRule[I, O] {
      override def apply(i: I): O = pf(i)
      override def isDefinedAt(i: I): Boolean = pf.isDefinedAt(i)
      val zero: O = zeroValue
    }

  def totalConversion[I, O](f: I => O): ConversionRule[I, O] =
    new ConversionRule[I, O] {
      override def apply(i: I) = f(i)
      override def isDefinedAt(i: I) = true
      override def zero: O = throw new AssertionError("Should not call since conversion is defined")
    }

  // Might want to put this in a `LowPriorityImplicits` trait which this object extends
  implicit def identityConversion[I]: ConversionRule[I, I] =
    totalConversion(identity)
}
identityConversion means that a convertIntToInt gets automatically generated.
convertStringToInt can then be defined as
implicit val stringToIntConversion: ConversionRule[String, Int] = ConversionRule[String, Int](0) {
  case x if x.forall(_.isDigit) => x.toInt
}
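A usage sketch, assuming the PartialFunction-based trait and the convert sketch above are in scope:

convert[String, Int]("123") // 123
convert[String, Int]("abc") // 0, falls back to zero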
One can define a toString based conversion (basically the non-lawful Show proposed for alleycats):
implicit def genericToString[I]: ConversionRule[I, String] =
  ConversionRule.totalConversion(_.toString)
And it should then be possible to define a stringViaInt ConversionRule derivation like:
implicit def stringViaInt[I](implicit toInt: ConversionRule[I, Int]): ConversionRule[I, String] =
  ConversionRule.totalConversion(in => convert(convert(in)(toInt))(genericToString[Int]))
The only really useful thing this provides is opt-in usage of implicit conversions. Whether that's enough of a gain to justify it? shrug
(Disclaimer: only the scala compiler in my head has attempted to compile this)
The Scala compiler detects the following two map functions as duplicates conflicting with each other:
class ADT {
  def map[Output <: AnyVal](f: Int => Output): List[Output] = ???
  def map[Output >: Null <: AnyRef](f: Int => Output): List[Output] = ???
}
The bound on the Output type parameter is different: the first limits it to AnyVal and the second to AnyRef. How can I differentiate them?
The problem is not differentiating AnyVal from AnyRef so much as getting around the fact that both method signatures become the same after erasure.
Here is a neat trick to get around this kind of problem. It is similar to what @som-snytt did, but a bit more generic, as it works for other similar situations as well (e.g. def foo(f: Int => String): String = ??? ; def foo(f: String => Int): Int = ??? etc.):
class ADT {
  def map[Output <: AnyVal](f: Int => Output): List[Output] = ???
  def map[Output >: Null <: AnyRef](f: Int => Output)(implicit dummy: DummyImplicit): List[Output] = ???
}
The cutest thing is that this works "out of the box": DummyImplicit is part of the standard library, and an instance of it is always in scope.
You can have more than two overloads this way too, by just adding more dummies to the implicit parameter list.
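For instance, a hedged sketch (hypothetical foo overloads, not from the original answer) of three signatures that would all clash after erasure without the dummies:

class Example {
  def foo(f: Int => String): String = ???
  def foo(f: String => Int)(implicit d1: DummyImplicit): Int = ???
  def foo(f: String => String)(implicit d1: DummyImplicit, d2: DummyImplicit): String = ???
}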
scala 2.13.0-M5> :pa
// Entering paste mode (ctrl-D to finish)
object X {
  def map[Output <: AnyVal](f: Int => Output) = 1
  def map[O](f: Int => O)(implicit ev: O <:< AnyRef) = 2
}
// Exiting paste mode, now interpreting.
defined object X
scala 2.13.0-M5> X.map((x: Int) => x*2)
res0: Int = 1
scala 2.13.0-M5> X.map((x: Int) => "")
res1: Int = 2
You could use a typeclass for that map method.
Using your exact example:
trait MyTC[Output] {
  def map(f: Int => Output): List[Output]
}

object MyTC {
  def apply[A](a: A)(implicit ev: MyTC[A]): MyTC[A] = ev

  implicit def anyRefMyTc[A <: AnyRef]: MyTC[A] = new MyTC[A] {
    def map(f: Int => A): List[A] = { println("inside sub-AnyRef"); List.empty }
  }
  implicit def anyValMyTc[A <: AnyVal]: MyTC[A] = new MyTC[A] {
    def map(f: Int => A): List[A] = { println("inside sub-AnyVal"); List.empty }
  }
}
import MyTC._
val r1 = Option("Test1")
val r2 = List(5)
val v1 = true
val v2 = 6L
// The functions here are just to prove the point, and don't do anything.
MyTC(r1).map(_ => None)
MyTC(r2).map(_ => List.empty)
MyTC(v1).map(_ => false)
MyTC(v2).map(_ => 10L)
That would print:
inside sub-AnyRef
inside sub-AnyRef
inside sub-AnyVal
inside sub-AnyVal
The advantage of this approach is that, should you then choose to specialise the behaviour further for just some specific type (e.g. say you want to do something specific for Option[String]), you can do that easily:
// This is added to MyTC object
implicit val optMyTc: MyTC[Option[String]] = new MyTC[Option[String]] {
  def map(f: Int => Option[String]): List[Option[String]] = { println("inside Option[String]"); List.empty }
}
Then, re-running the code will print:
inside Option[String]
inside sub-AnyRef
inside sub-AnyVal
inside sub-AnyVal
How to fix the code below so it works?
object Foo {
  object sum extends Poly {
    implicit def caseFoo = use((f1: Int, f2: Int) => f1 + f2)
  }

  def foo[L <: HList : <<:[Int]#λ](l: L): Int = {
    l.reduceLeft(sum)
    // Error: could not find implicit value for parameter reducer: shapeless.ops.hlist.LeftReducer[L,com.struct.Foo.sum.type]

    // l.reduceLeft(sum)
    // Error: not enough arguments for method reduceLeft: (implicit reducer: shapeless.ops.hlist.LeftReducer[L,com.struct.Foo.sum.type])reducer.Out.
    // Unspecified value parameter reducer.
  }
}
Additionally, how can I change it so that it reduces an HList of any Numeric values?
Update:
Please consider a fuller example of my problem:
import shapeless._

trait Bar {
  def foo: Int
}

case class Foo[L <: HList](l: L) extends Bar {
  object sum extends Poly {
    implicit def caseFoo[A: Numeric] = use((f1: A, f2: A) => f1 + f2)
    // Error: type mismatch;
    //  found   : A
    //  required: String
  }

  override def foo(implicit reducer: LeftReducer[L, sum.type]): Int = reducer(l)
  // Error: type mismatch;
  //  found   : reducer.Out
  //  required: Int
}
Update:
package com.test

import shapeless.ops.hlist.LeftReducer
import shapeless.{HList, HNil, Poly}

trait Bar {
  def foo: Int
}

case class Foo[L <: HList](list: L) extends Bar {
  import Numeric.Implicits._

  object sum extends Poly {
    implicit def caseFoo[A: Numeric] = use((f1: A, f2: A) => f1 + f2)
  }

  class MkFoo[T] {
    def apply(l: L)(implicit reducer: LeftReducer.Aux[L, sum.type, T]): T = reducer(l)
  }

  override def foo: Int = (new MkFoo[Int]())(list)
  // Error: could not find implicit value for parameter reducer: shapeless.ops.hlist.LeftReducer.Aux[L,Foo.this.sum.type,Int]
}

object Testing {
  def main(args: Array[String]): Unit = {
    val b: Bar = Foo(1 :: 2 :: 3 :: HNil)
    println(b.foo)
  }
}
The LUBConstraint.<<: type is not very constructive: it only witnesses that all HList members conform to some type, but nothing can be derived from it. To use the reduceLeft method you specifically need a LeftReducer operation provider. You can require it in your method directly:
import shapeless._, ops.hlist._

object Foo {
  import Numeric.Implicits._

  object sum extends Poly {
    implicit def caseFoo[A: Numeric] = use((f1: A, f2: A) => f1 + f2)
  }

  def foo[L <: HList](l: L)(implicit reducer: LeftReducer[L, sum.type]): reducer.Out =
    reducer(l)
}
Foo.foo(1 :: 2 :: 3 :: HNil) // res0: Int = 6
Foo.foo(1.0 :: 2.0 :: 3.0 :: HNil) // res1: Double = 6.0
Update
If you need direct evidence that your result will be of some specific type, you can use an additional type argument, but that needs a separation of type parameters via the Typed Maker pattern:
object Foo {
  import Numeric.Implicits._

  object sum extends Poly {
    implicit def caseFoo[A: Numeric] = use((f1: A, f2: A) => f1 + f2)
  }

  def foo[T] = new MkFoo[T]

  class MkFoo[T] {
    def apply[L <: HList](l: L)(implicit reducer: LeftReducer.Aux[L, sum.type, T]): T = reducer(l)
  }
}
now
Foo.foo[Int](1 :: 2 :: 3 :: HNil)
Foo.foo[Double](1.0 :: 2.0 :: 3.0 :: HNil)
will still produce the correct result, while
Foo.foo[Double](1 :: 2 :: 3 :: HNil)
will fail at compile time
This works:
package com.test

import shapeless.{::, Generic, HList, HNil, Lazy}

trait Bar {
  def foo: Int
}

case class Foo[L <: HList](list: L)(implicit ev: SizeCalculator[L]) extends Bar {
  override def foo: Int = ev.size(list)
}

object Testing {
  def main(args: Array[String]): Unit = {
    val b: Bar = Foo(1 :: 2 :: 3 :: HNil)
    println(b.foo)
  }
}

sealed trait SizeCalculator[T] {
  def size(value: T): Int
}

object SizeCalculator {
  // "Summoner" method
  def apply[A](implicit enc: SizeCalculator[A]): SizeCalculator[A] = enc

  // "Constructor" method
  def instance[A](func: A => Int): SizeCalculator[A] = new SizeCalculator[A] {
    def size(value: A): Int = func(value)
  }

  import Numeric.Implicits._

  implicit def numericEncoder[A: Numeric]: SizeCalculator[A] = new SizeCalculator[A] {
    override def size(value: A): Int = value.toInt()
  }

  implicit def hnilEncoder: SizeCalculator[HNil] = instance(hnil => 0)

  implicit def hlistEncoder[H, T <: HList](
    implicit
    hInstance: Lazy[SizeCalculator[H]],
    tInstance: SizeCalculator[T]
  ): SizeCalculator[H :: T] = instance {
    case h :: t =>
      hInstance.value.size(h) + tInstance.size(t)
  }

  implicit def genericInstance[A, R](
    implicit
    generic: Generic.Aux[A, R],
    rInstance: Lazy[SizeCalculator[R]]
  ): SizeCalculator[A] = instance { value => rInstance.value.size(generic.to(value)) }

  def computeSize[A](value: A)(implicit enc: SizeCalculator[A]): Int = enc.size(value)
}
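A brief usage sketch (Point is a hypothetical case class, added here only to show the Generic-based derivation):

SizeCalculator.computeSize(1 :: 2 :: 3 :: HNil) // 6, via hlistEncoder + numericEncoder

case class Point(x: Int, y: Int)
SizeCalculator.computeSize(Point(2, 3))         // 5, via genericInstance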
I was kind of disappointed by the fact that the Scala compiler allows this pretty weird, obviously mistaken code to compile:
val foo: PartialFunction[Any, Unit] = {
  case s: String => println(s)
}

foo()
and instead of reporting a compile error, it throws
Exception in thread "main" scala.MatchError: () (of class scala.runtime.BoxedUnit)
What was the reason for that?
In your case your partial function takes an Any argument, and that includes Unit (since Unit is a subtype of Any: Any -> AnyVal -> Unit). Calling apply() on it is equivalent to calling apply(()).
If you have a partial function which doesn't accept Unit you get an error indicating that the argument for apply is missing:
scala> val foo : PartialFunction[AnyRef, Unit] = {
| case arg => println(s"arg = $arg")
| }
foo: PartialFunction[AnyRef,Unit] = <function1>
scala> foo()
<console>:13: error: not enough arguments for method apply: (v1: AnyRef)Unit in trait Function1.
Unspecified value parameter v1.
foo()
What was the reason for that?
Because when accepting a parameter of type Any, the compiler deduces that the unit value is applicable in that case, and thus passes () to foo.apply of the PartialFunction:
def main(args: Array[String]): Unit = {
  val foo: PartialFunction[Any,Unit] = ({
    @SerialVersionUID(value = 0) final <synthetic> class $anonfun extends scala.runtime.AbstractPartialFunction[Any,Unit] with Serializable {
      def <init>(): <$anon: Any => Unit> = {
        $anonfun.super.<init>();
        ()
      };
      final override def applyOrElse[A1, B1 >: Unit](x1: A1, default: A1 => B1): B1 = ((x1.asInstanceOf[Any]: Any): Any @unchecked) match {
        case (s @ (_: String)) => scala.this.Predef.println(s)
        case (defaultCase$ @ _) => default.apply(x1)
      };
      final def isDefinedAt(x1: Any): Boolean = ((x1.asInstanceOf[Any]: Any): Any @unchecked) match {
        case (s @ (_: String)) => true
        case (defaultCase$ @ _) => false
      }
    };
    new $anonfun()
  }: PartialFunction[Any,Unit]);
  foo.apply(())
}
If the type restriction were narrower, i.e. AnyRef, you'd see a compile error, since Unit extends AnyVal, and the compiler can no longer provide any implicit "help".
Given the following from Travis Brown's educational and well-written Type classes and generic derivation:
import scala.util.Try
import shapeless._

case class Person(name: String, age: Double)

trait Parser[A] {
  def apply(s: String): Option[A]
}

implicit val hnilParser: Parser[HNil] = new Parser[HNil] {
  def apply(s: String): Option[HNil] = if (s.isEmpty) Some(HNil) else None
}

implicit def hconsParser[H: Parser, T <: HList: Parser]: Parser[H :: T] = new Parser[H :: T] {
  def apply(s: String): Option[H :: T] = s.split(",").toList match {
    case cell +: rest => for {
      head <- implicitly[Parser[H]].apply(cell)
      tail <- implicitly[Parser[T]].apply(rest.mkString(","))
    } yield head :: tail
  }
}

implicit val stringParser: Parser[String] = new Parser[String] {
  def apply(s: String): Option[String] = Some(s)
}

implicit val intParser: Parser[Int] = new Parser[Int] {
  def apply(s: String): Option[Int] = Try(s.toInt).toOption
}

implicit val doubleParser: Parser[Double] = new Parser[Double] {
  def apply(s: String): Option[Double] = Try(s.toDouble).toOption
}

implicit val booleanParser: Parser[Boolean] = new Parser[Boolean] {
  def apply(s: String): Option[Boolean] = Try(s.toBoolean).toOption
}

implicit def caseClassParser[A, R <: HList](implicit gen: Generic[A] { type Repr = R },
                                            reprParser: Parser[R]): Parser[A] =
  new Parser[A] {
    def apply(s: String): Option[A] = reprParser.apply(s).map(gen.from)
  }

object Parser {
  def apply[A](s: String)(implicit parser: Parser[A]): Option[A] = parser(s)
}
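With these definitions in scope, a flat case class parses as expected (a usage sketch consistent with the article's examples):

Parser[Person]("kevin,66") // Some(Person("kevin", 66.0))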
I was curious to try to get a Parser[X] where X is a case class with a Person argument, i.e. a case class:
case class PersonWrapper(person: Person, x: Int)
Yet I get an error:
scala> Parser[PersonWrapper]("kevin,66,42")
<console>:15: error: diverging implicit expansion for type net.Test.Parser[net.Test.PersonWrapper]
starting with method caseClassParser in object Test
Parser[PersonWrapper]("kevin,66,42")
^
First, why does this divergent implicit error occur?
Secondly, is it possible to use the above code to get a Parser[PersonWrapper]?
Secondly, is it possible to use the above code to get a Parser[PersonWrapper]?
No, just skip to the end of the article:
scala> case class BookBook(b1: Book, b2: Book)
defined class BookBook
scala> Parser[BookBook]("Hamlet,Shakespeare")
res7: Option[BookBook] = None
Our format doesn’t support any kind of nesting (at least we haven’t said anything about nesting, and it wouldn’t be trivial), so we don’t actually know how to parse a string into a BookBook...
The problem is that cell in case cell +: rest will only ever be a string with no commas by the time it is passed to implicitly[Parser[H]].apply(cell). For PersonWrapper, this means that the very first cell will attempt to do this:
implicitly[Parser[PersonWrapper]].apply("kevin")
Which will obviously fail to parse. In order to get nested parsers to work, you would need some way to group the cells together prior to applying a Parser[H] to them.