Is there a less-verbose way to achieve this?
case class MyClass(
a: A,
b: B,
c: C,
...
)
def updatedFromString(m: MyClass, field: String, value: String) = field match {
case "A" => m.withA(value)
case "B" => m.withB(value)
case "C" => m.withC(value)
...
}
implicit class FromStrings(m: MyClass) {
def withA(v: String) = m.copy(a = A.fromString(v))
def withB(v: String) = m.copy(b = B.fromString(v))
def withC(v: String) = m.copy(c = C.fromString(v))
...
}
MyClass has a lot of fields - a,b,c, etc - all of which are instances of different case classes.
This leads to a lot of case statements above and a lot of updater methods named withXXX, which look fairly repetitive.
You could extract the logic:
// repetitive and generic enough to make it easier to generate
val setters: Map[String, String => MyClass => MyClass] = Map(
"A" -> (v => _.copy(a = A.fromString(v))),
"B" -> (v => _.copy(b = B.fromString(v))),
"C" -> (v => _.copy(c = C.fromString(v))),
...
)
def updatedFromString(m: MyClass, field: String, value: String) =
setters(field)(value)(m)
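To make the shape concrete, here is a self-contained toy version with just two fields; the A/B/C field types and their fromString parsers from the question are replaced by plain Int/String for illustration:

```scala
// Toy stand-in for the real MyClass; a and b replace the A/B/C fields
case class MyClass(a: Int, b: String)

// One entry per field; the explicit type annotation drives lambda inference
val setters: Map[String, String => MyClass => MyClass] = Map(
  "A" -> (v => _.copy(a = v.toInt)),
  "B" -> (v => _.copy(b = v))
)

def updatedFromString(m: MyClass, field: String, value: String): MyClass =
  setters(field)(value)(m)
```

With this, `updatedFromString(MyClass(1, "x"), "A", "7")` yields `MyClass(7, "x")`.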
If it is still too much, you could generate setters using macros or runtime reflection, but I am not sure it is worth the effort.
EDIT: An alternative solution which changes how you deal with code:
sealed trait PatchedField
object PatchedField {
// field names corresponding to names from MyClass
case class FieldA(a: A) extends PatchedField
case class FieldB(b: B) extends PatchedField
...
}
// removes stringiness and creates some type-level information
def parseKV(key: String, value: String): PatchedField = key match {
case "A" => FieldA(A.fromString(value))
case "B" => FieldB(B.fromString(value))
...
}
import io.scalaland.chimney.dsl._
def updatedFromString(m: MyClass, field: String, value: String) =
parseKV(field, value) match {
// performs exhaustivity check
case fieldA: FieldA => m.patchUsing(fieldA)
case fieldB: FieldB => m.patchUsing(fieldB)
...
}
If you don't like it... well, then you have to write your own macro, some very obscure Shapeless, and/or codegen:
there is no way to generate x.copy(y = z) without a macro; even if some library does it, it uses a macro underneath. At best you could use some lens library, but AFAIK no lens library out of the box provides a Setter for a field by the singleton type of the field's name (that is, without writing something like Lens[Type](_.field) explicitly). That, I believe, would be doable with some Shapeless black magic mapping over LabelledGeneric
you might still need to convert a String singleton type into a Symbol singleton type at compile time; I am not sure that is possible in Shapeless, so you might need to push it down to the value level, summoning a Witness and then .toUpperCase-ing it
you would have to make each field aware of the Type.fromString functionality. Is it different for each field by its name, or by its type? If the latter, you could use a normal parser typeclass. If the former, this typeclass would have to be dependently typed on the singleton type of the field name. Either way, you would most likely have to define these typeclasses yourself
then you would have to combine all of that together
It could be easier if you did it at runtime (scanning classes for methods and fields) instead of compile time... but then you would have no checks that a conversion actually exists from a field string to its value, and type erasure would kick in (Option[Int] and Option[String] being the same thing, null not being anything).
With compile time approach you would have to at least define a typeclass per type, and then manually create the code that would put it all together. With some fast prototyping I arrived at:
import shapeless._
import shapeless.labelled._
trait StringParser[A] { def parse(string: String): A }
object StringParser {
implicit val string: StringParser[String] = s => s
implicit val int: StringParser[Int] = s => s.toInt
implicit val double: StringParser[Double] = s => s.toDouble
// ...
}
trait Mapper[X] { def mapper(): Map[String, StringParser[_]] }
object Mapper {
implicit val hnilMapper: Mapper[HNil] = () => Map.empty
implicit def consMapper[K <: Symbol, H, Repr <: HList](
implicit
key: Witness.Aux[K],
parser: StringParser[H],
mapper: Mapper[Repr]
): Mapper[FieldType[K, H] :: Repr] = () => mapper.mapper() + (key.value.name -> (parser : StringParser[_]))
implicit def hlistMapper[T, Repr <: HList](
implicit gen: LabelledGeneric.Aux[T, Repr],
mapper: Mapper[Repr]
): Mapper[T] = () => mapper.mapper()
def apply[T](implicit mapper: Mapper[T]): Map[String, StringParser[_]] = mapper.mapper()
}
val mappers = Mapper[MyClass]
Which you could use like:
1. convert the field String into an actual field name,
2. extract the parser from the map using that name,
3. pass the value to the parser,
4. use runtime reflection to simulate copy, or generate the copy calls using macros.
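Hand-wiring those four steps for a hypothetical two-field class shows what the plumbing looks like (the shapeless Mapper above would generate the parser map automatically; the final copy step still has to be written by hand or generated):

```scala
trait StringParser[A] { def parse(string: String): A }

// Hypothetical two-field class, for illustration only
case class MyClass(a: Int, b: String)

// What Mapper[MyClass] would produce, written out by hand
val parsers: Map[String, StringParser[_]] = Map(
  "a" -> new StringParser[Int] { def parse(s: String) = s.toInt },
  "b" -> new StringParser[String] { def parse(s: String) = s }
)

// Steps 1-3: normalize the field name, look up the parser, parse the value
def parseField(field: String, value: String): Any =
  parsers(field.toLowerCase).parse(value)

// Step 4: the copy call itself, written manually (or generated by a macro)
def updated(m: MyClass, field: String, value: String): MyClass =
  field.toLowerCase match {
    case "a" => m.copy(a = parseField(field, value).asInstanceOf[Int])
    case "b" => m.copy(b = parseField(field, value).asInstanceOf[String])
  }
```

Note the `asInstanceOf` casts: since the map erases the parsers' result types to `StringParser[_]`, the compile-time link between field name and field type is lost, which is exactly the gap the macro/Shapeless approaches try to close.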
The last part simply cannot be done "magically" - as far as I am aware, there is no library where you would require an implicit Lens[Type, fieldName] and obtain Lens[Type, fieldName] { type Input; def setter(input: Input): Type => Type }, so there is nothing which would generate that .copy for you. As a result it would require some form of manually written reflection.
If you want to have compile-time safety at this step, you might as well do the rest compile-time safe as well and implement everything as a macro which verifies the presence of the right typeclasses and things.
Related
I want to build a simple library in which a developer can define a Scala class that represents command line arguments (to keep it simple, just a single set of required arguments -- no flags or optional arguments). I'd like the library to parse the command line arguments and return an instance of the class. The user of the library would just do something like this:
case class FooArgs(fluxType: String, capacitorCount: Int)
def main(args: Array[String]) {
val argsObject: FooArgs = ArgParser.parse(args).as[FooArgs]
// do real stuff
}
The parser should throw runtime errors if the provided arguments do not match the expected types (e.g. if someone passes the string "bar" at a position where an Int is expected).
How can I dynamically build a FooArgs without knowing its shape in advance? Since FooArgs can have any arity or types, I don't know how to iterate over the command line arguments, cast or convert them to the expected types, and then use the result to construct a FooArgs. Basically, I want to do something along these lines:
// ** notional code - does not compile **
def parse[T](args: Seq[String], klass: Class[T]): T = {
val expectedTypes = klass.getDeclaredFields.map(_.getGenericType)
val typedArgs = args.zip(expectedTypes).map({
case (arg, String) => arg
case (arg, Int) => arg.toInt
case (arg, unknownType) =>
throw new RuntimeException(s"Unsupported type $unknownType")
})
(klass.getConstructor(typedArgs).newInstance _).tupled(typedArgs)
}
Any suggestions on how I can achieve something like this?
When you want to abstract over a case class (or Tuple) shape, the standard approach is to get the HList representation of the case class with the help of the shapeless library. An HList keeps track of the types and number of its elements in its type signature. Then you can implement the algorithm you want recursively on the HList. Shapeless also provides a number of helpful transformations of HLists in shapeless.ops.hlist.
For this problem, first we need to define an auxiliary typeclass to parse an argument of some type from String:
trait Read[T] {
def apply(str: String): T
}
object Read {
def make[T](f: String => T): Read[T] = new Read[T] {
def apply(str: String) = f(str)
}
implicit val string: Read[String] = make(identity)
implicit val int: Read[Int] = make(_.toInt)
}
You can define more instances of this typeclass, if you need to support other argument types than String or Int.
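For example, Double and Boolean instances follow the same pattern (the Read definition is repeated here so the snippet stands alone; the extra instances are my additions):

```scala
trait Read[T] {
  def apply(str: String): T
}

object Read {
  def make[T](f: String => T): Read[T] = new Read[T] {
    def apply(str: String) = f(str)
  }
  implicit val string: Read[String] = make(identity)
  implicit val int: Read[Int] = make(_.toInt)
  // additional instances, same pattern:
  implicit val double: Read[Double] = make(_.toDouble)
  implicit val boolean: Read[Boolean] = make(_.toBoolean)
}
```

Any type with a Read instance in scope then works as a case-class field type for the parser below.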
Then we can define the actual typeclass that parses a sequence of arguments into some type:
// This is needed, because there seems to be a conflict between
// HList's :: and the standard Scala's ::
import shapeless.{:: => :::, _}
trait ParseArgs[T] {
def apply(args: List[String]): T
}
object ParseArgs {
// Base of the recursion on HList
implicit val hnil: ParseArgs[HNil] = new ParseArgs[HNil] {
def apply(args: List[String]) =
if (args.nonEmpty) sys.error("too many args")
else HNil
}
// A single recursion step on HList
implicit def hlist[T, H <: HList](
implicit read: Read[T], parseRest: ParseArgs[H]
): ParseArgs[T ::: H] = new ParseArgs[T ::: H] {
def apply(args: List[String]) = args match {
case first :: rest => read(first) :: parseRest(rest)
case Nil => sys.error("too few args")
}
}
// The implementation for any case class, based on its HList representation
implicit def caseClass[C, H <: HList](
implicit gen: Generic.Aux[C, H], parse: ParseArgs[H]
): ParseArgs[C] = new ParseArgs[C] {
def apply(args: List[String]) = gen.from(parse(args))
}
}
And lastly we can define an API that uses this typeclass. For example:
case class ArgParser(args: List[String]) {
def to[C](implicit parseArgs: ParseArgs[C]): C = parseArgs(args)
}
object ArgParser {
def parse(args: Array[String]): ArgParser = ArgParser(args.toList)
}
And a simple test:
scala> ArgParser.parse(Array("flux", "10")).to[FooArgs]
res0: FooArgs = FooArgs(flux,10)
There is a great guide on using shapeless for solving similar problems, which you may find helpful: The Type Astronaut’s Guide to Shapeless
I have a situation where I need a method that can take in types:
Array[Int]
Array[Array[Int]]
Array[Array[Array[Int]]]
Array[Array[Array[Array[Int]]]]
etc...
let's call this type RAI for "recursive array of ints"
def make(rai: RAI): ArrayPrinter = { ArrayPrinter(rai) }
Where ArrayPrinter is a class that is initialized with an RAI and iterates through the entire rai (let's say it prints all the values in this Array[Array[Int]])
val arrayOfArray: Array[Array[Int]] = Array(Array(1, 2), Array(3, 4))
val printer: ArrayPrinter[Array[Array[Int]]] = make(arrayOfArray)
printer.print_! // prints "1, 2, 3, 4"
It can also return the original Array[Array[Int]] without losing any type information.
val arr: Array[Array[Int]] = printer.getNestedArray()
How do you implement this in Scala?
Let's first focus on types. According to your definition, a type T should typecheck as an argument of ArrayPrinter if it is accepted by the following type function:
def accept[T]: Boolean =
T match { // That's everyday business in Agda
case Array[Int] => true
case Array[X] => accept[X]
case _ => false
}
In Scala, you can encode that type function using implicit resolution:
trait RAI[T]
object RAI {
implicit val e0: RAI[Array[Int]] = null
implicit def e1[T](implicit i: RAI[T]): RAI[Array[T]] = null
}
case class ArrayPrinter[T: RAI](getNestedArray: T) // Only compiles if T is an RAI
To print things, the simplest solution is to treat the rai: T as a rai: Any:
def print_! : Unit = {
def print0(a: Any): Unit = a match {
case i: Int => println(i)
case xs: Array[_] => xs.foreach(print0)
case _ => ???
}
print0(getNestedArray) // actually invoke the helper
}
You could also be fancy and write print_! using type classes, but that would probably be less efficient and take more time to write than the above... Left as an exercise for the reader ;-)
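Putting both pieces together, a minimal end-to-end sketch might look like this (returning a String instead of printing, so the result is easy to check; the method name and the use of anonymous instances instead of null are my choices):

```scala
trait RAI[T]
object RAI {
  // base case: Array[Int] is accepted
  implicit val e0: RAI[Array[Int]] = new RAI[Array[Int]] {}
  // recursive case: Array[T] is accepted whenever T is
  implicit def e1[T](implicit i: RAI[T]): RAI[Array[T]] = new RAI[Array[T]] {}
}

case class ArrayPrinter[T: RAI](getNestedArray: T) {
  // collect the leaf Ints by treating the nested array as Any
  def mkString_! : String = {
    def collect(a: Any): List[Int] = a match {
      case i: Int       => List(i)
      case xs: Array[_] => xs.toList.flatMap(collect)
      case _            => Nil
    }
    collect(getNestedArray).mkString(", ")
  }
}
```

`ArrayPrinter(Array(Array(1, 2), Array(3, 4))).mkString_!` produces "1, 2, 3, 4", while `ArrayPrinter("oops")` fails to compile because no RAI[String] instance exists.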
The way this is typically done is by defining an abstract class that contains all the functionality that you would want related to this recursive type, but does not actually take any constructor arguments. Rather, all of its methods take (at least one of) the type as an argument. The canonical example would be Ordering. Define one or more implicit implementations of this class, and then any time you need to use it, accept it as an implicit parameter. The corresponding example would be List's sorted method.
In your case, this might look like:
abstract class ArrayPrinter[A] {
def mkString(a: A): String
}
implicit object BaseArrayPrinter extends ArrayPrinter[Int] {
override def mkString(x: Int) = x.toString
}
class WrappedArrayPrinter[A](wrapped: ArrayPrinter[A]) extends ArrayPrinter[Array[A]] {
override def mkString(xs: Array[A]) = xs.map(wrapped.mkString).mkString(", ")
}
implicit def makeWrappedAP[A](implicit wrapped: ArrayPrinter[A]): ArrayPrinter[Array[A]] = new WrappedArrayPrinter(wrapped)
def printHello[A](xs: A)(implicit printer: ArrayPrinter[A]): Unit = {
println("hello, array: " + printer.mkString(xs))
}
This tends to be a bit cleaner than having that RAIOps class (or ArrayPrinter) take in an object as part of its constructor. That usually leads to more "boxing" and "unboxing", complicated type signatures, strange pattern matching, etc.
It also has the added benefit of being easier to extend. If later someone else has a reason to want an implementation of ArrayPrinter for a Set[Int], they can define it locally to their code. I have many times defined a custom Ordering.
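For instance, a locally defined Set[Int] instance (my addition, with the ArrayPrinter definitions from above repeated so the snippet is self-contained) plugs in without touching the original code:

```scala
abstract class ArrayPrinter[A] {
  def mkString(a: A): String
}

implicit object BaseArrayPrinter extends ArrayPrinter[Int] {
  override def mkString(x: Int) = x.toString
}

implicit def makeWrappedAP[A](implicit wrapped: ArrayPrinter[A]): ArrayPrinter[Array[A]] =
  new ArrayPrinter[Array[A]] {
    override def mkString(xs: Array[A]) = xs.map(wrapped.mkString).mkString(", ")
  }

// the local extension: nobody upstream needs to know about it
implicit object SetIntPrinter extends ArrayPrinter[Set[Int]] {
  override def mkString(xs: Set[Int]) = xs.toList.sorted.map(_.toString).mkString(", ")
}

def mkStringOf[A](xs: A)(implicit printer: ArrayPrinter[A]): String =
  printer.mkString(xs)
```

The same call site handles both shapes: `mkStringOf(Set(3, 1, 2))` and `mkStringOf(Array(Array(1, 2), Array(3)))` each resolve their printer implicitly.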
I am writing a simple variable system in Scala. I need a way to nicely hold these AnyVal values and access them by their string names. I have an object called Context to manage them, but it is really just a wrapper for HashMap[String, AnyVal]. I want to find a nicer way to do this.
class Context {
import scala.collection.mutable.HashMap
import scala.reflect.ClassTag
private var variables = HashMap[String, AnyVal] ()
def getVariable (name: String):AnyVal = variables(name)
def setVariable (name: String, value: AnyVal) = variables += ((name, value))
def getVariableOfType[T <: AnyVal : ClassTag] (name:String):T ={
val v = variables(name)
v match {
case x: T => x
case _ => null.asInstanceOf[T]
}
}
}
I really want an implementation that is type safe
You are defining the upper bound of your variables as AnyVal. These are mostly primitive types such as Int, Boolean, etc., not reference types for which null makes sense. Therefore you have to include a cast to T. This doesn't give you a null reference but the default value for that value type, such as 0 for Int or false for Boolean:
null.asInstanceOf[Int] // 0
null.asInstanceOf[Boolean] // false
So this has nothing to do with type-safety but just the fact that Scala would otherwise refuse to use a null value here. Your pattern match against x: T is already type-safe.
A better approach would be to return an Option[T] or to throw a NoSuchElementException if you try to query a non-existing variable or the wrong type.
Be aware that using a plain hash-map without synchronisation means your Context is not thread safe.
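A sketch of the Option-returning variant suggested above (the class name is mine; it is still unsynchronized, per the caveat):

```scala
import scala.reflect.ClassTag

// Option-based lookup instead of casting null: a missing variable or a
// wrong requested type simply yields None
class SafeContext {
  private var variables = Map.empty[String, AnyVal]

  def setVariable(name: String, value: AnyVal): Unit =
    variables += name -> value

  // the ClassTag makes the `case x: T` pattern checkable at runtime
  def getVariableOfType[T <: AnyVal : ClassTag](name: String): Option[T] =
    variables.get(name).collect { case x: T => x }
}
```

On recent Scala versions the ClassTag-based match also handles boxed primitives; for older versions the caveats discussed in the next answer apply.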
Pattern matching with abstract types is difficult, and more so with instances of AnyVal. I found an approach which works at "ClassTag based pattern matching fails for primitives".
In general for your problem, I would simply use a Map[String, AnyVal] with an implicit extension to give you getOfType. Then you can use the existing methods to put and get variables as you wish. You can use a type alias to give your type a more meaningful reference.
object Context {
import scala.reflect.ClassTag
import scala.runtime.ScalaRunTime
type Context = Map[String, AnyVal]
def apply(items: (String, AnyVal)*): Context = items.toMap
implicit class ContextMethods(c: Context) {
class MyTag[A](val t: ClassTag[A]) extends ClassTag[A] {
override def runtimeClass = t.runtimeClass
override def unapply(x: Any): Option[A] =
if (t.runtimeClass.isPrimitive && (ScalaRunTime isAnyVal x) &&
(x.getClass getField "TYPE" get null) == t.runtimeClass)
Some(x.asInstanceOf[A])
else t unapply x
}
def getOfType[T <: AnyVal](key: String)(implicit t: ClassTag[T]): Option[T] = {
implicit val u = new MyTag(t)
c.get(key).flatMap {
case x: T => Some(x: T)
case _ => None
}
}
}
}
With this stuff collected together in an object which you can import, use of it becomes straightforward:
import Context._
val context = Context(
"int_var" -> 123,
"long_var" -> 456l,
"double_var" -> 23.5d
)
context.get("int_var")//Some(123)
context.get("long_var")//Some(456)
context.getOfType[Int]("int_var")//Some(123)
context.getOfType[Double]("double_var")//Some(23.5)
context.getOfType[Int]("double_var")//None
I am trying to use Scala macros to create a case class map of single-parameter copy methods, with each method accepting a Play Json JsValue and a case class instance, and returning an updated copy of the instance. However, I am running into problems with the macro syntax for returning a function object.
Given a case class
case class Clazz(id: Int, str: String, strOpt: Option[String])
the intention is to create a map of the class's copy methods
implicit def jsonToInt(json: JsValue) = json.as[Int]
implicit def jsonToStr(json: JsValue) = json.as[String]
implicit def jsonToStrOpt(json: JsValue) = json.asOpt[String]
Map("id" -> ((json: JsValue, clazz: Clazz) => clazz.copy(id = json)),
"str" -> ((json: JsValue, clazz: Clazz) => clazz.copy(str = json)), ...)
I have found two related questions:
Using macros to create a case class field map: Scala Macros: Making a Map out of fields of a class in Scala
Accessing the case class copy method using a macro: Howto model named parameters in method invocations with Scala macros?
...but I am stuck at how I can create a function object so that I can return a Map[String, (JsValue, T) => T]
Edit: Thanks to Eugene Burmako's suggestion to use quasiquotes - this is where I'm currently at using Scala 2.11.0-M7, basing my code on Jonathan Chow's post (I switched from using (T, JsValue) => T to (T, String) => T to simplify my REPL imports)
Edit2: Now incorporating $tpe splicing
import scala.language.experimental.macros
implicit def strToInt(str: String) = str.toInt
def copyMapImpl[T: c.WeakTypeTag](c: scala.reflect.macros.Context):
c.Expr[Map[String, (T, String) => T]] = {
import c.universe._
val tpe = weakTypeOf[T]
val fields = tpe.declarations.collectFirst {
case m: MethodSymbol if m.isPrimaryConstructor => m
}.get.paramss.head
val methods = fields.map { field => {
val name = field.name
val decoded = name.decoded
q"{$decoded -> {(t: $tpe, str: String) => t.copy($name = str)}}"
}}
c.Expr[Map[String, (T, String) => T]] {
q"Map(..$methods)"
}
}
def copyMap[T]: Map[String, (T, String) => T] = macro copyMapImpl[T]
case class Clazz(i: Int, s: String)
copyMap[Clazz]
You got almost everything right in your code, except for the fact that you need to splice T into a quasiquote, i.e. to write $tpe instead of just T.
For that to look more natural, I usually explicitly declare type tag evidences in macros, e.g. def foo[T](c: Context)(implicit T: c.WeakTypeTag[T]) = .... After that I just write $T, and it looks almost fine :)
You might ask why quasiquotes can't just figure out that, in the place where they're written, T refers to the type parameter of a macro and then automatically splice it in. That would be a very reasonable question, actually. In languages like Racket and Scheme, quasiquotes are smart enough to remember things about the lexical context they are written in, but in Scala this is a bit more difficult, because there are so many different scopes in the language. Yet, there's a plan to get there, and research in that direction is already underway: https://groups.google.com/forum/#!topic/scala-language/7h27npd1DKI.
Apparently unapply/unapplySeq in extractor objects do not support implicit parameters. Assuming here an interesting parameter a, and a disturbingly ubiquitous parameter b that would be nice to hide away, when extracting c.
[EDIT]: It appears something was broken in my IntelliJ/Scala-plugin installation that caused this. I cannot explain it. I was having numerous strange problems with my IntelliJ lately. After reinstalling, I can no longer reproduce my problem. Confirmed that unapply/unapplySeq do allow for implicit parameters! Thanks for your help.
This does not work (EDIT: yes, it does):
trait A; trait C; trait B { def getC(a: A): C }
def unapply(a:A)(implicit b:B):Option[C] = Option(b.getC(a))
In my understanding of what an ideal extractor should be like, in which the intention is intuitively clear also to Java folks, this limitation basically forbids extractor objects which depend on additional parameter(s).
How do you typically handle this limitation?
So far I've got those four possible solutions:
1) The simplest solution that I want to improve on: don't hide b, provide parameter b along with a, as normal parameter of unapply in form of a tuple:
object A1{
def unapply(a:(A,B)):Option[C] = Option(a._2.getC(a._1)) }
in client code:
val c1 = (a, b) match { case A1(c) => c }
I don't like it because there is extra noise obscuring the important part, the deconstruction of a into c. Also, the Java folks who have to be convinced to actually use this Scala code are confronted with one additional syntactic novelty (the tuple braces). They might get anti-Scala aggressions: "What's all this? Why not just use a normal method in the first place and check with if?"
2) define extractors within a class encapsulating the dependence on a particular B, and import the extractors of that instance. The import site is a bit unusual for Java folks, but at the pattern-match site b is hidden nicely and it is intuitively evident what happens. My favorite. Did I miss some disadvantage?
class BDependent(b:B){
object A2{
def unapply(a:A):Option[C] = Option(b.getC(a))
} }
usage in client code:
val bDeps = new BDependent(someB)
import bDeps.A2
val a:A = ...
val c2 = a match { case A2(c) => c }
3) declare extractor objects in the scope of the client code. b is hidden, since the extractor can use a "b" from the local scope. Hampers code reuse and heavily pollutes client code (additionally, it has to be stated before the code using it).
4) have unapply return an Option of a function B => C. This allows import and usage of a ubiquitous-parameter-dependent extractor without providing b directly to the extractor; instead it is provided to the result when used. Java folks may be confused by the use of function values, and b is not hidden:
object A4{
def unapply(a: A): Option[B => C] = Option((_: B).getC(a))
}
then in client code:
val b:B = ...
val soonAC: B => C = a match { case A4(x) => x }
val d = soonAC(b).getD ...
Further remarks:
As suggested in this answer, "view bounds" may help to get extractors working with implicit conversions, but this doesn't help with implicit parameters. For some reason I prefer not to work around this with implicit conversions.
I looked into "context bounds", but they seem to have the same limitation, don't they?
In what sense does your first line of code not work? There's certainly no arbitrary prohibition on implicit parameter lists for extractor methods.
Consider the following setup (I'm using plain old classes instead of case classes to show that there's no extra magic happening here):
class A(val i: Int)
class C(val x: String)
class B(pre: String) { def getC(a: A) = new C(pre + a.i.toString) }
Now we define an implicit B value and create an extractor object with your unapply method:
implicit val b = new B("prefix: ")
object D {
def unapply(a: A)(implicit b: B): Option[C] = Option(b getC a)
}
Which we can use like this:
scala> val D(c) = new A(42)
c: C = C@52394fb3
scala> c.x
res0: String = prefix: 42
Exactly as we'd expect. I don't see why you need a workaround here.
The problem you have is that implicit parameters are compile time (static) constraints, whereas pattern matching is a runtime (dynamic) approach.
trait A; trait C; trait B { def getC(a: A): C }
object Extractor {
def unapply(a: A)(implicit b: B): Option[C] = Some(b.getC(a))
}
// compiles (implicit is statically provided)
def withImplicit(a: A)(implicit b: B) : Option[C] = a match {
case Extractor(c) => Some(c)
case _ => None
}
// does not compile
def withoutImplicit(a: A) : Option[C] = a match {
case Extractor(c) => Some(c)
case _ => None
}
So this is a conceptual problem, and the solution depends on what you actually want to achieve. If you want something along the lines of an optional implicit, you might use the following:
sealed trait FallbackNone {
implicit object None extends Optional[Nothing] {
def toOption = scala.None
}
}
object Optional extends FallbackNone {
implicit def some[A](implicit a: A) = Some(a)
final case class Some[A](a: A) extends Optional[A] {
def toOption = scala.Some(a)
}
}
sealed trait Optional[+A] { def toOption: Option[A]}
Then where you had implicit b: B you will have implicit b: Optional[B]:
object Extractor {
def unapply(a:A)(implicit b: Optional[B]):Option[C] =
b.toOption.map(_.getC(a))
}
def test(a: A)(implicit b: Optional[B]) : Option[C] = a match {
case Extractor(c) => Some(c)
case _ => None
}
And the following both compile:
test(new A {}) // None
{
implicit object BImpl extends B { def getC(a: A) = new C {} }
test(new A {}) // Some(...)
}