Safe / Unsafe pattern in Scala

Hi, I have a class which does some JSON extraction, and I want to provide safe and unsafe versions of it.
Currently I have a class definition like this:
class Safe {
def getA: Option[String] = ...
def getB: Option[Int] = ...
... etc ...
}
And then an Unsafe version of that which just delegates to the Safe class:
class Unsafe(delegate: Safe) {
def getA: String = delegate.getA.get
def getB: Int = delegate.getB.get
... etc ...
}
This works, but the main problem is that the delegation is maintained by hand: whenever anything about the Safe interface changes, someone has to remember to reflect that change in the Unsafe class as well.
Is there a more idiomatic and less manual pattern in Scala to do this?

Here is an implementation of my proposal.
Define Extractor interface parameterized by a type constructor:
trait Extractor[F[_]] { outer =>
def getA: F[String]
def getB: F[Int]
/* insert `transform` here */
}
Implement a transform method once and for all, which takes an arbitrary F ~> G:
def transform[G[_]](natTrafo: F ~> G): Extractor[G] =
new Extractor[G] {
def getA: G[String] = natTrafo[String](outer.getA)
def getB: G[Int] = natTrafo[Int](outer.getB)
}
Here, F ~> G is the type of polymorphic functions that can transform any F[A] into a G[A] for an arbitrary type A (String, Int, or a thousand other types you might want to get out of your extractor):
trait ~>[F[_], G[_]] {
def apply[A](fa: F[A]): G[A]
}
This interface is quite ubiquitous: it's available in both Scalaz and Cats (where it is called FunctionK), and is commonly known as a "natural transformation".
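For reference, a minimal sketch of the Cats version (this assumes a cats dependency; the ~> alias there points at cats.arrow.FunctionK):
import cats.~>  // alias for cats.arrow.FunctionK

// a FunctionK that turns every Option[A] into a List[A]
val optionToList: Option ~> List = new (Option ~> List) {
  def apply[A](fa: Option[A]): List[A] = fa.toList
}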
Implement SafeExtractor:
class SafeExtractor extends Extractor[Option] {
def getA: Option[String] = None /* do sth. more clever here? */
def getB: Option[Int] = None
}
Get UnsafeExtractor for free by providing a simple Option ~> Id implementation to transform:
type Id[X] = X
val safe: Extractor[Option] = new SafeExtractor()
val unsafe: Extractor[Id] = safe.transform(
new ~>[Option, Id] {
def apply[A](x: Option[A]): Id[A] = x.get
}
)
You can now also easily reuse the same transform function to convert Extractor[Future] to Extractor[Id] by Awaiting results, or Extractor[Id] to Extractor[Try] by catching all errors etc.
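For instance, the Future ~> Id case can be a thin wrapper around Await. A minimal sketch, assuming the ~> and Id definitions from this answer (futureExtractor is a hypothetical Extractor[Future], and the 10 second timeout is an arbitrary choice):
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

// Future ~> Id: block until each extracted value is available
val awaitAll: Future ~> Id = new ~>[Future, Id] {
  def apply[A](fa: Future[A]): Id[A] = Await.result(fa, 10.seconds)
}

// val blocking: Extractor[Id] = futureExtractor.transform(awaitAll)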
Full code
import scala.language.higherKinds
trait Extractor[F[_]] { outer =>
def getA: F[String]
def getB: F[Int]
def transform[G[_]](natTrafo: F ~> G): Extractor[G] =
new Extractor[G] {
def getA: G[String] = natTrafo[String](outer.getA)
def getB: G[Int] = natTrafo[Int](outer.getB)
}
}
/** A polymorphic function that can transform any
* `F[A]` into a `G[A]` for all possible `A`.
*/
trait ~>[F[_], G[_]] {
def apply[A](fa: F[A]): G[A]
}
class SafeExtractor extends Extractor[Option] {
def getA: Option[String] = None
def getB: Option[Int] = None
}
type Id[X] = X
val safe: Extractor[Option] = new SafeExtractor()
val unsafe: Extractor[Id] = safe.transform(
new ~>[Option, Id] {
def apply[A](x: Option[A]): Id[A] = x.get
}
)

The answer by Andrey Tyukin is a good way to address this specific problem, but there is perhaps a bigger design issue.
Your code assumes that the structure of the JSON (as defined in Safe) matches the structure of the data in your code (as defined by Unsafe). This causes problems when you want to change the structure of the data in your code, or when the JSON format changes, or when you want to import data from a different source (e.g. XML), or when you want to do more complex validation of the data.
So the correct way to handle this is to design the application data structures (your Unsafe class) in the way that fits your application. You then provide a JSON-reading library that converts incoming data to that format. This library uses your Safe class internally. This can perform any validation/data conditioning that may be required, and can adjust to changes in the JSON format without affecting the rest of the system. Your unit testing framework will make sure that your two classes stay in sync.
This design pattern is an example of separation of concerns.
Here is some sample code:
In the main application:
// Data structure for use by application
case class Unsafe(a: String, b: Int)
// Abstract interface for loading data
trait Loader {
def load(): Unsafe
}
In the data loading library:
// JSON implementation of loading interface
object JsonLoader extends Loader {
protected case class Safe(
a: Option[String],
b: Option[Int]
)
def load(): Unsafe = {
val json = rawLibrary.readData() // Load data in JSON format
val safe: Safe = jsonLibrary.extract[Safe](json)
// Validate/condition the raw data here
val a = safe.a.getOrElse("")
val b = safe.b.getOrElse(0)
// Return the application data
Unsafe(a, b)
}
}
The mapping from JSON to application data is hidden inside the JsonLoader object. This makes it easier to keep the two classes in sync and allows the JSON format to change without affecting the wider code.
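To make the dependency direction concrete, here is a small sketch of the application side (startApp is a hypothetical entry point, not part of the answer above):
// The application only ever sees Loader and Unsafe; supporting XML later
// means adding another Loader implementation, not touching this code.
def startApp(loader: Loader): Unit = {
  val data = loader.load()
  println(s"a=${data.a}, b=${data.b}")
}

startApp(JsonLoader)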

Related

Transform F[A] to Future[A]

I have a repository:
trait CustomerRepo[F[_]] {
def get(id: Identifiable[Customer]): F[Option[CustomerWithId]]
def get(): F[List[CustomerWithId]]
}
I have an implementation for my database which uses Cats IO, so I have a CustomerRepoPostgres[IO].
class CustomerRepoPostgres(xa: Transactor[IO]) extends CustomerRepo[IO] {
import doobie.implicits._
val makeId = IO {
Identifiable[Customer](UUID.randomUUID())
}
override def get(id: Identifiable[Customer]): IO[Option[CustomerWithId]] =
sql"select id, name, active from customer where id = $id"
.query[CustomerWithId].option.transact(xa)
override def get(): IO[List[CustomerWithId]] =
sql"select id, name, active from customer"
.query[CustomerWithId].to[List].transact(xa)
}
Now, I want to use a library which cannot deal with arbitrary holder types (it only supports Future). So I need a CustomerRepoPostgres[Future].
I thought to write some bridge code which can convert my CustomerRepoPostgres[IO] to CustomerRepoPostgres[Future]:
class RepoBridge[F[_]](repo: CustomerRepo[F])
(implicit convertList: F[List[CustomerWithId]] => Future[List[CustomerWithId]],
convertOption: F[Option[CustomerWithId]] => Future[Option[CustomerWithId]]) {
def get(id: Identifiable[Customer]): Future[Option[CustomerWithId]] = repo.get(id)
def get(): Future[List[CustomerWithId]] = repo.get()
}
I don't like that this approach requires implicit converters for every type used in the repository. Is there a better way to do this?
This is exactly what the tagless final approach is for: abstracting over F by requiring it to satisfy specific type class constraints. For example, let's create an implementation which requires F to have an Applicative instance:
trait CustomerRepo[F[_]] {
def get(id: Identifiable[Customer]): F[Option[CustomerWithId]]
def get(): F[List[CustomerWithId]]
}
import cats.Applicative
class CustomerRepoImpl[F[_]](implicit A: Applicative[F]) extends CustomerRepo[F] {
def get(id: Identifiable[Customer]): F[Option[CustomerWithId]] = {
A.pure(???)
}
def get(): F[List[CustomerWithId]] = {
A.pure(???)
}
}
This way, no matter the concrete type of F, if it has an instance of Applicative[F] then you'll be good to go, with no need to define any transformers.
The way we do this is to put the relevant constraints on F according to the processing we need to do. If we need sequential computation, we can require Monad[F] and flatMap the results; if no sequencing is needed, Applicative[F] may be strong enough.
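If the library really insists on Future, another option is to keep the repository abstract in F and write the bridge once as a natural transformation rather than one implicit converter per return type. A sketch, under the assumption that you are on cats-effect 2.x where IO#unsafeToFuture() is available directly (repoPostgres stands in for your existing CustomerRepoPostgres instance):
import cats.~>
import cats.effect.IO
import scala.concurrent.Future

// Re-wraps any CustomerRepo[F] as a CustomerRepo[G], given a single F ~> G
class MappedCustomerRepo[F[_], G[_]](repo: CustomerRepo[F])(nat: F ~> G) extends CustomerRepo[G] {
  override def get(id: Identifiable[Customer]): G[Option[CustomerWithId]] = nat(repo.get(id))
  override def get(): G[List[CustomerWithId]] = nat(repo.get())
}

val ioToFuture: IO ~> Future = new (IO ~> Future) {
  def apply[A](fa: IO[A]): Future[A] = fa.unsafeToFuture()
}

// val futureRepo: CustomerRepo[Future] = new MappedCustomerRepo(repoPostgres)(ioToFuture)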

Class type required but E found (scala macro)

I'm trying to remove some of the boilerplate in an API I am writing.
Roughly speaking, my API currently looks like this:
def toEither[E <: WrapperBase](priority: Int)(implicit factory: (String, Int) => E): Either[E, T] = {
val either: Either[String, T] = generateEither()
either.left.map(s => factory(s, priority))
}
This means that the user has to provide an implicit factory for every E used. I am looking to replace this with a macro that gives a nice compile error if the user-provided type doesn't have the correct constructor parameters.
I have the following:
object GenericFactory {
def create[T](ctorParams: Any*): T = macro createMacro[T]
def createMacro[T](c: blackbox.Context)(ctorParams: c.Expr[Any]*)(implicit wtt: c.WeakTypeTag[T]): c.Expr[T] = {
import c.universe._
c.Expr[T](q"new $wtt(..$ctorParams)")
}
}
If I provide a real type to this, GenericFactory.create[String]("hey"), I have no issues, but if I provide a generic type, GenericFactory.create[E]("hey"), then I get the following compile error: class type required but E found.
Where have I gone wrong? Or if what I want is NOT possible, is there anything else I can do to reduce the effort for the user?
Sorry, but I don't think you can make it work. The problem is that Scala (like Java) uses type erasure: there is only one compiled version of the code for all instantiations of a generic (except for value-type specializations, which are not important here). It means that the macro is expanded only once for all E rather than once for each E specialization provided by the user. And there is no way to express a restriction that some generic type E must have a constructor with a given signature (and if there were, you wouldn't need your macro in the first place). So it cannot work, because the compiler can't generate a constructor call for a generic type E. What the compiler is telling you is that to generate a constructor call it needs a real class rather than the generic E.
To put it another way, a macro is not a magic tool. Using a macro is just a way to rewrite a piece of code early in compilation; afterwards it is processed by the compiler in the usual way. What your macro does is rewrite
GenericFactory.create[E]("hey")
with something like
new E("hey")
If you just write that in your code, you'll get the same error (and probably will not be surprised).
I don't think you can avoid using your implicit factory. You probably could modify your macro to generate those implicit factories for valid types but I don't think you can improve the code further.
Update: implicit factory and macro
If you have just one place where you need one kind of constructor, I think the best you can do (or rather the best I can do ☺) is the following.
Side note: the whole idea comes from the "Implicit macros" article.
You define a StringIntCtor[T] typeclass trait and a macro that generates instances of it:
import scala.language.experimental.macros
import scala.reflect.macros._
trait StringIntCtor[T] {
def create(s: String, i: Int): T
}
object StringIntCtor {
implicit def implicitCtor[T]: StringIntCtor[T] = macro createMacro[T]
def createMacro[T](c: blackbox.Context)(implicit wtt: c.WeakTypeTag[T]): c.Expr[StringIntCtor[T]] = {
import c.universe._
val targetTypes = List(typeOf[String], typeOf[Int])
def testCtor(ctor: MethodSymbol): Boolean = {
if (ctor.paramLists.size != 1)
false
else {
val types = ctor.paramLists(0).map(sym => sym.typeSignature)
(targetTypes.size == types.size) && targetTypes.zip(types).forall(tp => tp._1 =:= tp._2)
}
}
val ctors = wtt.tpe.decl(c.universe.TermName("<init>"))
if (!ctors.asTerm.alternatives.exists(sym => testCtor(sym.asMethod))) {
c.abort(c.enclosingPosition, s"Type ${wtt.tpe} has no constructor with signature <init>${targetTypes.mkString("(", ", ", ")")}")
}
// Note: using fully qualified names for all types not imported by default is important here
val res = c.Expr[StringIntCtor[T]](
q"""
(new so.macros.StringIntCtor[$wtt] {
override def create(s:String, i: Int): $wtt = new $wtt(s, i)
})
""")
//println(res) // log the macro
res
}
}
You use that trait as
class WrapperBase(val s: String, val i: Int)
case class WrapperChildGood(override val s: String, override val i: Int, val float: Float) extends WrapperBase(s, i) {
def this(s: String, i: Int) = this(s, i, 0f)
}
case class WrapperChildBad(override val s: String, override val i: Int, val float: Float) extends WrapperBase(s, i) {
}
object EitherHelper {
type T = String
import scala.util._
val rnd = new Random(1)
def generateEither(): Either[String, T] = {
if (rnd.nextBoolean()) {
Left("left")
}
else {
Right("right")
}
}
def toEither[E <: WrapperBase](priority: Int)(implicit factory: StringIntCtor[E]): Either[E, T] = {
val either: Either[String, T] = generateEither()
either.left.map(s => factory.create(s, priority))
}
}
So now you can do:
val x1 = EitherHelper.toEither[WrapperChildGood](1)
println(s"x1 = $x1")
val x2 = EitherHelper.toEither[WrapperChildGood](2)
println(s"x2 = $x2")
//val bad = EitherHelper.toEither[WrapperChildBad](3) // compilation error generated by c.abort
and it will print
x1 = Left(WrapperChildGood(left,1,0.0))
x2 = Right(right)
If you have many different places where you want to ensure different constructors exist, you'll need to make the macro much more complicated so it can generate constructor calls with arbitrary signatures passed in from the outside.

Scala context bounds

Using context bounds in Scala you can do stuff like
trait HasBuild[T] {
def build(buildable: T): Something
}
object Builders {
implicit object IntBuilder extends HasBuild[Int] {
override def build(i: Int) = ??? // Construct a Something however appropriate
}
}
import Builders._
def foo[T: HasBuild](input: T): Something = implicitly[HasBuild[T]].build(input)
val somethingFromInt = foo(1)
Or simply
val somethingFromInt = implicitly[HasBuild[Int]].build(1)
How could I express the type of a Seq of any elements that have an appropriate implicit HasBuild object in scope? Is this possible without too much magic and external libraries?
Seq[WhatTypeGoesHere] - I should be able to find the appropriate HasBuild for each element
This obviously doesn't compile:
val buildables: Seq[_: HasBuild] = ???
Basically I'd like to be able to handle unrelated types in a common way (e.g. build), without the user manually wrapping them in some kind of adapter, and have the compiler enforce that the types actually can be handled. Not sure if the purpose is clear.
Something you can do:
case class HasHasBuild[A](value: A)(implicit val ev: HasBuild[A])
object HasHasBuild {
implicit def removeEvidence[A](x: HasHasBuild[A]): A = x.value
implicit def addEvidence[A: HasBuild](x: A): HasHasBuild[A] = HasHasBuild(x)
}
and now (assuming you add a HasBuild[String] for demonstration):
val buildables: Seq[HasHasBuild[_]] = Seq(1, "a")
compiles, but
val buildables1: Seq[HasHasBuild[_]] = Seq(1, "a", 1.0)
doesn't. You can use methods with implicit HasBuild parameters when you have only a HasHasBuild:
def foo1[A](x: HasHasBuild[A]) = {
import x.ev // now you have an implicit HasBuild[A] in scope
foo(x.value)
}
val somethings: Seq[Something] = buildables.map(foo1(_))
First things first: contrary to some of the comments, you are relying on context bounds. Requesting an implicit type class instance for a T is exactly what a "context bound" is.
What you want is achievable, but not trivial and certainly not without other libraries.
import shapeless.ops.hlist.{Mapper, ToList}
import shapeless._
import shapeless.poly._
object builder extends Poly1 {
implicit def caseGeneric[T : HasBuild] = {
at[T](obj => implicitly[HasBuild[T]].build(obj))
}
}
class Builder[L <: HList](mappings: L) {
def build[HL <: HList]()(
implicit fn: Mapper.Aux[builder.type, L, HL],
lister: ToList[HL, Something]
) = lister(mappings map builder)
def and[T : HasBuild](el: T) = new Builder[T :: L](el :: mappings)
}
object Builder {
def apply[T : HasBuild](el: T) = new Builder(el :: HNil)
}
Now you might be able to do stuff like:
Builder(5).and("string").build()
This will call the build methods of all the individual implicit type class instances and give you a list of the results, where every result has type Something. It relies on the fact that all the build methods share Something as their result type, e.g. as per your example:
trait HasBuild[T] {
def build(buildable: T): Something
}

Scala: overloading methods based on provided type

Consider a simple object that serves as a storage for some cohesive data discriminated by type. I want it to have an API which is:
consistent and concise;
compile-time safe.
I can easily provide such API for saving objects by using overloading:
object CatsAndDogsStorage {
def save(key: String, cat: Cat): Future[Unit] = { /* write cat to db */ }
def save(key: String, dog: Dog): Future[Unit] = { /* save dog to Map */ }
/* other methods */
}
But I cannot find a good way to declare such methods for loading objects. Ideally, I would want something like this:
// Futures of two unrelated objects
val catFuture: Future[Cat] = CatsAndDogsStorage.load[Cat]("Lucky")
val dogFuture = CatsAndDogsStorage.load[Dog]("Lucky")
I'm fairly new to Scala, but I know that I have these options (sorted from the least preferred):
1. Different method names
def loadCat(key: String): Future[Cat] = { /* ... */ }
def loadDog(key: String): Future[Dog] = { /* ... */ }
Not the most concise approach. I also dislike that if I decide to rename Cat to something else, I have to rename the method too.
2. Runtime check for provided class
def load[T: ClassTag](key: String): Future[T] = classTag[T] match {
case t if t == classOf[Dog] => /* ... */
case c if c == classOf[Cat] => /* ... */
}
This one gives the desired syntax, but it fails at runtime, not at compile time.
3. Dummy implicits
def load[T <: Cat](key: String): Future[Cat] = /* ... */
def load[T <: Dog](key: String)(implicit i1: DummyImplicit): Future[Dog]
This code becomes a nightmare when you have a handful of types to support. It also makes it quite inconvenient to remove those types later.
4. Sealed trait + runtime check
sealed trait Loadable
case class Cat() extends Loadable
case class Dog() extends Loadable
def load[T <: Loadable: ClassTag](key: String): Future[T] = classTag[T] match {
case t if t == classOf[Dog] => /* ... */
case c if c == classOf[Cat] => /* ... */
}
This has the advantage of 2) while preventing the user from asking for anything besides Dog or Cat. Still, I would rather not change the object hierarchy. I could use union types to make the code shorter.
So, the last solution is okay, but it still feels hack-ish, and maybe there is another known way which I just cannot figure out.
Having functions with slightly different names doing similar work for different types doesn't seem bad to me.
If you really want a facade API that dispatches on the type, you can use typeclasses.
trait SaveFn[T] extends (T => Future[Unit]) {}
object SaveFn {
implicit object SaveDog extends SaveFn[Dog] { def apply(dog: Dog): Future[Unit] = ??? }
implicit object SaveCat extends SaveFn[Cat] { def apply(cat: Cat): Future[Unit] = ??? }
}
object Storage {
def save[T : SaveFn](in: T): Future[Unit] = implicitly[SaveFn[T]](in)
}
For the .load case:
trait LoadFn[T] extends (String => Future[T]) {}
object LoadFn {
implicit object LoadDog extends LoadFn[Dog] { def apply(key: String): Future[Dog] = ??? }
implicit object LoadCat extends LoadFn[Cat] { def apply(key: String): Future[Cat] = ??? }
}
object Storage {
def load[T : LoadFn](key: String): Future[T] = implicitly[LoadFn[T]](key)
}
For .load, the type cannot be inferred from the arguments the way it can for .save, so it's a bit less nice to use: Storage.load[Dog]("dogKey")
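A hedged usage sketch, assuming hypothetical Dog/Cat case classes and that the save and load methods above are merged into a single Storage object:
import scala.concurrent.Future

case class Dog(name: String) // hypothetical model type, for illustration only

val saved: Future[Unit] = Storage.save(Dog("Rex"))    // T is inferred from the argument
val loaded: Future[Dog] = Storage.load[Dog]("dogKey") // T has to be spelled out explicitly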

Is it possible to define a function return type based on a defined mapping from the type of a function argument?

Ideally I'd like to be able to do the following in Scala:
import Builders._
val myBuilder = builder[TypeToBuild] // Returns instance of TypeToBuildBuilder
val obj = myBuilder.methodOnTypeToBuildBuilder(...).build()
In principle the goal is simply to be able to 'map' TypeToBuild to TypeToBuildBuilder using external mapping definitions (i.e. assume no ability to change these classes) and leverage this in type inference.
I got the following working with AnyRef types:
import Builders._
val myBuilder = builder(TypeToBuild)
myBuilder.methodOnTypeToBuildBuilder(...).build()
object Builders {
implicit val typeToBuildBuilderFactory =
new BuilderFactory[TypeToBuild.type, TypeToBuildBuilder]
def builder[T, B](typ: T)(implicit ev: BuilderFactory[T, B]): B = ev.create
}
class BuilderFactory[T, B: ClassTag] {
def create: B = classTag[B].runtimeClass.newInstance().asInstanceOf[B]
}
Note that the type is passed as a function argument rather than a type argument.
I'd be supremely happy just to find out how to get the above working with Any types, rather than just AnyRef types. It seems this limitation arises because singleton types are only supported for AnyRefs (i.e. my use of TypeToBuild.type).
That being said, an answer that solves the original 'ideal' scenario (using a type argument instead of a function argument) would be fantastic!
EDIT
A possible solution that requires classOf[_] (would really love not needing to use classOf!):
import Builders._
val myBuilder = builder(classOf[TypeToBuild])
myBuilder.methodOnTypeToBuildBuilder(...).build()
object Builders {
implicit val typeToBuildBuilderFactory =
new BuilderFactory[Class[TypeToBuild], TypeToBuildBuilder]
def builder[T, B](typ: T)(implicit ev: BuilderFactory[T, B]): B = ev.create
}
class BuilderFactory[T, B: ClassTag] {
def create: B = classTag[B].runtimeClass.newInstance().asInstanceOf[B]
}
Being able to just use builder(TypeToBuild) is really just a win in elegance/brevity. Being able to use builder[TypeToBuild] would be cool as perhaps this could one day work (with type inference advancements in Scala):
val obj: TypeToBuild = builder.methodOnTypeToBuildBuilder(...).build();
Here is a complete, working example using classOf: http://ideone.com/94rat3
Yes, Scala supports return types based on the parameter types. An example of this would be methods in the collections API like map that use the CanBuildFrom typeclass to return the desired type.
I'm not sure what you are trying to do with your example code, but maybe you want something like:
trait Builder[-A, +B] {
def create(x: A): B
}
object Builders {
implicit val int2StringBuilder = new Builder[Int, String] {
def create(x: Int) = "a" * x
}
def buildFrom[A, B](x: A)(implicit ev: Builder[A, B]): B = ev.create(x)
}
import Builders._
buildFrom(5)
The magic with newInstance only works for concrete classes that have a constructor that takes no parameters, so it probably isn't generic enough to be useful.
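A quick illustration of that limitation with two hypothetical classes (note that Class#newInstance is deprecated on newer JDKs, but the behaviour is the same):
class NoArg                // has a public no-argument constructor
class OneArg(val x: Int)   // only has a one-argument constructor

val ok: NoArg = classOf[NoArg].newInstance()        // works
// classOf[OneArg].newInstance()                    // throws InstantiationException at runtime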
If you're not afraid of implicit conversions, you could do something like this:
import scala.language.implicitConversions
trait BuilderMapping[TypeToBuild, BuilderType] {
def create: BuilderType
}
case class BuilderSpec[TypeToBuild]()
def builder[TypeToBuild] = BuilderSpec[TypeToBuild]
implicit def builderSpecToBuilder[TypeToBuild, BuilderType]
(spec: BuilderSpec[TypeToBuild])
(implicit ev: BuilderMapping[TypeToBuild, BuilderType]) = ev.create
case class Foo(count: Int)
case class FooBuilder() {
def translate(f: Foo) = "a" * f.count
}
implicit val FooToFooBuilder = new BuilderMapping[Foo, FooBuilder] {
def create = FooBuilder()
}
val b = builder[Foo]
println(b.translate(Foo(3)))
The implicit conversions aren't too bad, since they're constrained to these builder-oriented types. The conversion is needed to make b.translate valid.
It looked like wingedsubmariner's answer was most of what you wanted, but you didn't want to specify both TypeToBuild and BuilderType (and you didn't necessarily want to pass a value). To achieve that, we needed to break up that single generic signature into two parts, which is why the BuilderSpec type exists.
It might also be possible to use something like partial generic application (see the answers to a question that I asked earlier), though I can't put the pieces together in my head at the moment.
I'll resort to answering my own question since a Redditor ended up giving me the answer I was looking for and they appear to have chosen not to respond here.
trait Buildable[T] {
type Result
def newBuilder: Result
}
object Buildable {
implicit object ABuildable extends Buildable[A] {
type Result = ABuilder
override def newBuilder = new ABuilder
}
implicit object BBuildable extends Buildable[B] {
type Result = BBuilder
override def newBuilder = new BBuilder
}
}
def builder[T](implicit B: Buildable[T]): B.Result = B.newBuilder
class ABuilder {
def method1() = println("Call from ABuilder")
}
class BBuilder {
def method2() = println("Call from BBuilder")
}
Then you will get:
scala> builder[A].method1()
Call from ABuilder
scala> builder[B].method2()
Call from BBuilder
You can see the reddit post here: http://www.reddit.com/r/scala/comments/2542x8/is_it_possible_to_define_a_function_return_type/
And a full working version here: http://ideone.com/oPI7Az