I have a repository:
trait CustomerRepo[F[_]] {
def get(id: Identifiable[Customer]): F[Option[CustomerWithId]]
def get(): F[List[CustomerWithId]]
}
I have an implementation for my database which uses Cats IO, so I have a CustomerRepoPostgres[IO].
class CustomerRepoPostgres(xa: Transactor[IO]) extends CustomerRepo[IO] {
import doobie.implicits._
val makeId = IO {
Identifiable[Customer](UUID.randomUUID())
}
override def get(id: Identifiable[Customer]): IO[Option[CustomerWithId]] =
sql"select id, name, active from customer where id = $id"
.query[CustomerWithId].option.transact(xa)
override def get(): IO[List[CustomerWithId]] =
sql"select id, name, active from customer"
.query[CustomerWithId].to[List].transact(xa)
}
Now, I want to use a library which cannot deal with arbitrary holder types (it only supports Future). So I need a CustomerRepoPostgres[Future].
I thought to write some bridge code which can convert my CustomerRepoPostgres[IO] to CustomerRepoPostgres[Future]:
class RepoBridge[F[_]](repo: CustomerRepo[F])
(implicit convertList: F[List[CustomerWithId]] => Future[List[CustomerWithId]],
convertOption: F[Option[CustomerWithId]] => Future[Option[CustomerWithId]]) {
def get(id: Identifiable[Customer]): Future[Option[CustomerWithId]] = repo.get(id)
def get(): Future[List[CustomerWithId]] = repo.get()
}
I don't like that this approach requires implicit converters for every type used in the repository. Is there a better way to do this?
This is exactly what the tagless final approach is for: abstracting over F by requiring it to satisfy some specific constraints. For example, let's create an implementation which requires F to have an Applicative instance:
trait CustomerRepo[F[_]] {
def get(id: Identifiable[Customer]): F[Option[CustomerWithId]]
def get(): F[List[CustomerWithId]]
}
class CustomerRepoImpl[F[_]](implicit A: Applicative[F]) extends CustomerRepo[F] {
def get(id: Identifiable[Customer]): F[Option[CustomerWithId]] = {
A.pure(???)
}
def get(): F[List[CustomerWithId]] = {
A.pure(???)
}
}
This way, no matter the concrete type of F, if it has an instance of Applicative[F] then you'll be good to go, with no need to define any transformers.
The way we do this is simply to put the relevant constraints on F according to the processing we need to do. If we need a sequential computation, we can use Monad[F] and flatMap the results. If no sequencing is needed, Applicative[F] might be strong enough.
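For example, the implementation above can then be instantiated for any such F. Here is a minimal sketch, assuming cats' Applicative instances for IO and Future are available (the names below are purely illustrative):
import cats.effect.IO
import cats.instances.future._ // Applicative[Future], given an implicit ExecutionContext
import scala.concurrent.{ExecutionContext, Future}
implicit val ec: ExecutionContext = ExecutionContext.global
// the same implementation works for any F that has an Applicative instance
val ioRepo: CustomerRepo[IO] = new CustomerRepoImpl[IO]
val futureRepo: CustomerRepo[Future] = new CustomerRepoImpl[Future]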
I have an algebra like this:
object Algebra {
case class Product(id: String, description: String)
case class ShoppingCart(id: String, products: List[Product])
trait ShoppingCarts[F[_]] {
def create(id: String): F[Unit]
def get(id: String): F[ShoppingCart]
def find(id: String): F[Option[ShoppingCart]]
}
}
I came up with the following implementation, but I wonder if it would be possible to implement it as a generic method within the trait itself. I have tried to context-bound F to Functor to gain access to map, but that is not a valid construct.
override def get(id: String): ScRepoState[ShoppingCart] =
find(id).flatMap {
case Some(sc) => sc.pure[ScRepoState]
case None => create(id) *> get(id)
}
Another problem is implementing the addMany() method. I got something like this:
def addMany[F[_] : Monad](cartId: String, products: List[Product])(implicit shoppingCarts: ShoppingCarts[F]): F[ShoppingCart] = {
for {
cart <- shoppingCarts.get(cartId)
product <- products.pure[F]
newCart <- product.traverse(product => shoppingCarts.add(cart, product))
} yield newCart
}
I'm struggling with how to mix different wrappers within a single for-comprehension block.
But I wonder if it would be possible to implement it as a generic method within the trait itself.
Not quite. Scala 2 doesn't allow traits to have parameters, but you can use an abstract class instead. You can either drop the trait entirely, or keep it and provide a default abstract class with all the derivable implementations, e.g.:
abstract class DefaultShoppingCarts[F[_]: Monad] extends ShoppingCarts[F] {
override def get(id: String): F[ShoppingCart] =
find(id).flatMap {
case Some(sc) => sc.pure[F]
case None => create(id) >> get(id)
}
}
This is my preferred method, but there are other options for changing traits directly.
You can add a Monad parameter to a method:
trait ShoppingCarts[F[_]] {
def create(id: String): F[Unit]
def find(id: String): F[Option[ShoppingCart]]
def get(id: String)(implicit F: Monad[F]): F[ShoppingCart] =
find(id).flatMap {
case Some(sc) => sc.pure[F]
case None => create(id) >> get(id)
}
}
This is quite different from what we did in the abstract class example, because the use site of ShoppingCarts is forced to have a Monad available instead of the construction site, and an implementor who wants to override the method has to replicate the signature exactly even if that Monad[F] is not used.
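For example, a minimal sketch of such a use site (the caller, not the implementation, now supplies the Monad):
// hypothetical caller: the context bound provides the Monad[F] that get() demands
def program[F[_]: Monad](carts: ShoppingCarts[F]): F[ShoppingCart] =
  carts.get("cart-1")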
You can also emulate what trait parameters would do with abstract implicit defs:
trait ShoppingCarts[F[_]] {
implicit protected def F: Monad[F]
def create(id: String): F[Unit]
def get(id: String): F[ShoppingCart] =
find(id).flatMap {
case Some(sc) => sc.pure[F]
case None => create(id) >> get(id)
}
def find(id: String): F[Option[ShoppingCart]]
}
This works but you are more likely to run into technical issues with implicit scope when implementing the F member.
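For example, a minimal sketch of an implementation supplying that member. Note that the constructor parameter is deliberately named differently from the F member, since writing implicit val F: Monad[G] = Monad[G] can make the implicit resolve to itself:
class InMemoryShoppingCarts[G[_]](implicit M: Monad[G]) extends ShoppingCarts[G] {
  implicit protected def F: Monad[G] = M
  // placeholder bodies, just to keep the sketch self-contained
  def create(id: String): G[Unit] = M.unit
  def find(id: String): G[Option[ShoppingCart]] = M.pure(None)
}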
I'm struggling with how to mix different wrappers within a single for-comprehension block.
You don't. No mixing is allowed. Don't use the for-comprehension for the list; only use it for F. In some more complex cases you might want to nest for-comprehensions or use a monad transformer, but here you only need to work in F. I'm also not sure what the return type of add is, but assuming it's F[ShoppingCart]:
def addMany[F[_] : Monad](cartId: String, products: List[Product])(implicit shoppingCarts: ShoppingCarts[F]): F[ShoppingCart] = {
for {
cart <- shoppingCarts.get(cartId)
results <- products.traverse(product => shoppingCarts.add(cart, product))
// results is a list of intermediate carts, get the last one; fallback if list was empty
} yield results.lastOption.getOrElse(cart)
}
Also please ask second question separately next time.
I have a situation where I'd like to implement a given trait (CanBeString in the example below). I would like to have the option either to implement that trait using a newly created case class (NewImplementation in the example below), or to implement it by adding functionality to some pre-existing type (just Int in the example below), by using a type class. This is probably best illustrated by the below:
package example
// typeclass
trait ConvertsToString[A] {
def asString(value: A): String
}
// the trait I would like the typeclass to implement
trait CanBeString {
def asString: String
}
// this implementation approach taken from the scala with cats book
object ConvertsToStringInstances {
implicit val intConvertsToString: ConvertsToString[Int] =
new ConvertsToString[Int] {
def asString(value: Int): String = s"${value}"
}
}
object ConvertsToStringSyntax {
implicit class ConvertsToStringOps[A](value: A) {
def asString(implicit c: ConvertsToString[A]): String = c.asString(value)
}
}
object Test {
import ConvertsToStringInstances._
import ConvertsToStringSyntax._
def testAsFunc(c: CanBeString): String = c.asString
case class NewImplementation (f: Double) extends CanBeString {
def asString = s"{f}"
}
println(testAsFunc(NewImplementation(1.002))) // this works fine!
println(testAsFunc(1)) // this sadly does not.
}
Is anything like this possible? I'm only recently discovering the topic of typeclasses so I'm aware that what I'm asking for here may be possible but just unwise - if so please chime in and let me know what a better idiom might be.
Thanks in advance, and also afterwards!
For example, you can have two overloaded versions of testAsFunc (OOP-style and typeclass-style):
object Test {
...
def testAsFunc(c: CanBeString): String = c.asString
def testAsFunc[C: ConvertsToString](c: C): String = c.asString
println(testAsFunc(NewImplementation(1.002))) // {f}
println(testAsFunc(1)) // 1
}
Or, if you prefer to have only one testAsFunc, you can add instances of the type class for subtypes of the trait to be implemented:
object ConvertsToStringInstances {
implicit val intConvertsToString: ConvertsToString[Int] = ...
implicit def canBeStringSubtypeConvertsToString[A <: CanBeString]: ConvertsToString[A] =
new ConvertsToString[A] {
override def asString(value: A): String = value.asString
}
}
object Test {
...
def testAsFunc[C: ConvertsToString](c: C): String = c.asString
println(testAsFunc(NewImplementation(1.002))) // {f}
println(testAsFunc(1)) // 1
}
Please notice that if, for a given c, both the OOP-style c.asString and the extension-method c.asString are applicable, only the former is actually called.
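A quick check of that precedence with the code above:
val n = NewImplementation(1.002)
// the class's own asString wins over the ConvertsToStringOps extension method,
// so this prints "{f}" (the interpolator in NewImplementation is missing a $)
println(n.asString)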
Hi, I have a class which does some JSON extraction, and I want to have a safe and an unsafe version.
Currently I have a class definition like this
class Safe {
def getA: Option[String] = ...
def getB: Option[Int] = ...
... etc ...
}
And then an Unsafe version of that which just delegates to the Safe class:
class Unsafe(delegate: Safe) {
def getA: String = delegate.getA.get
def getB: Int = delegate.getB.get
... etc ...
}
This works, but the main problem is obviously that the delegation is maintained by hand: if we ever change anything about the Safe interface, someone has to manually make sure the change is reflected in the Unsafe class as well.
Is there a more idiomatic and less manual pattern in Scala to do this?
Here is an implementation of my proposal.
Define Extractor interface parameterized by a type constructor:
trait Extractor[F[_]] { outer =>
def getA: F[String]
def getB: F[Int]
/* insert `transform` here */
}
Implement a transform method once and for all, which takes an arbitrary F ~> G:
def transform[G[_]](natTrafo: F ~> G): Extractor[G] =
new Extractor[G] {
def getA: G[String] = natTrafo[String](outer.getA)
def getB: G[Int] = natTrafo[Int](outer.getB)
}
Here, the F ~> G is a type of polymorphic functions that can transform any F[A] into G[A] for arbitrary type A (String, Int, or thousand other types you want to get in your extractor):
trait ~>[F[_], G[_]] {
def apply[A](fa: F[A]): G[A]
}
This interface is quite ubiquitous: it's available in both Scalaz and Cats (where it's called FunctionK), and is sometimes referred to as a "natural transformation".
Implement SafeExtractor:
class SafeExtractor extends Extractor[Option] {
def getA: Option[String] = None /* do sth. more clever here? */
def getB: Option[Int] = None
}
Get UnsafeExtractor for free by providing a simple Option ~> Id implementation to transform:
type Id[X] = X
val safe: Extractor[Option] = new SafeExtractor()
val unsafe: Extractor[Id] = safe.transform(
new ~>[Option, Id] {
def apply[A](x: Option[A]): Id[A] = x.get
}
)
You can now also easily reuse the same transform function to convert Extractor[Future] to Extractor[Id] by Awaiting results, or Extractor[Id] to Extractor[Try] by catching all errors etc.
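For example, a sketch of that Future case (this blocks the calling thread, so only use it where blocking is acceptable; the timeout and names are illustrative):
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
def awaitAll(timeout: FiniteDuration): Future ~> Id =
  new ~>[Future, Id] {
    def apply[A](fa: Future[A]): Id[A] = Await.result(fa, timeout)
  }
// val sync: Extractor[Id] = someFutureExtractor.transform(awaitAll(5.seconds))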
Full code
import scala.language.higherKinds
trait Extractor[F[_]] { outer =>
def getA: F[String]
def getB: F[Int]
def transform[G[_]](natTrafo: F ~> G): Extractor[G] =
new Extractor[G] {
def getA: G[String] = natTrafo[String](outer.getA)
def getB: G[Int] = natTrafo[Int](outer.getB)
}
}
/** A polymorphic function that can transform any
* `F[A]` into a `G[A]` for all possible `A`.
*/
trait ~>[F[_], G[_]] {
def apply[A](fa: F[A]): G[A]
}
class SafeExtractor extends Extractor[Option] {
def getA: Option[String] = None
def getB: Option[Int] = None
}
type Id[X] = X
val safe: Extractor[Option] = new SafeExtractor()
val unsafe: Extractor[Id] = safe.transform(
new ~>[Option, Id] {
def apply[A](x: Option[A]): Id[A] = x.get
}
)
The answer by Andrey Tyukin is a good way to address this specific problem, but there is perhaps a bigger design issue.
Your code assumes that the structure of the JSON (as defined in Safe) matches the structure of the data in your code (as defined by Unsafe). This causes problems when you want to change the structure of the data in your code, or when the JSON format changes, or when you want to import data from a different source (e.g. XML), or when you want to do more complex validation of the data.
So the correct way to handle this is to design the application data structures (your Unsafe class) in the way that fits your application. You then provide a JSON-reading library that converts incoming data to that format. This library uses your Safe class internally. This can perform any validation/data conditioning that may be required, and can adjust to changes in the JSON format without affecting the rest of the system. Your unit testing framework will make sure that your two classes stay in sync.
This design pattern is an example of separation of concerns.
Here is some sample code:
In the main application:
// Data structure for use by application
case class Unsafe(a: String, b: Int)
// Abstract interface for loading data
trait Loader {
def load(): Unsafe
}
In the data loading library:
// JSON implementation of loading interface
object JsonLoader extends Loader {
protected case class Safe(
a: Option[String],
b: Option[Int]
)
def load(): Unsafe = {
val json = rawLibrary.readData() // Load data in JSON format
val safe: Safe = jsonLibrary.extract[Safe](json)
// Validate/condition the raw data here
val a = safe.a.getOrElse("")
val b = safe.b.getOrElse(0)
// Return the application data
Unsafe(a, b)
}
}
The mapping from JSON to application data is hidden inside the JsonLoader object. This makes it easier to keep the two in sync and allows the JSON format to change without affecting the wider code.
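From the application's point of view, usage is then just (a sketch):
// the application only sees the Loader abstraction and its own Unsafe data type
val loader: Loader = JsonLoader
val data: Unsafe = loader.load()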
I'm using two Scala libraries that both rely on implicit parameters to supply codecs/marshallers for case classes (the libraries in question are msgpack4s and op-rabbit). A simplified example follows:
sealed abstract trait Event
case class SomeEvent(msg: String) extends Event
case class OtherEvent(code: String) extends Event
// Assume library1 needs Show and library2 needs Printer
trait Show[A] { def show(a: A): String }
trait Printer[A] { def printIt(a: A): Unit }
object ShowInstances {
implicit val showSomeEvent = new Show[SomeEvent] {
override def show(a: SomeEvent) =
s"SomeEvent: ${a.msg}"
}
implicit val showOtherEvent = new Show[OtherEvent] {
override def show(a: OtherEvent) =
s"OtherEvent: ${a.code}"
}
}
The Printer for the one library can be generic provided there's an implicit Show for the other library available:
object PrinterInstances {
implicit def somePrinter[A: Show]: Printer[A] = new Printer[A] {
override def printIt(a: A): Unit =
println(implicitly[Show[A]].show(a))
}
}
I want to provide an API that abstracts over the details of the underlying libraries - callers should only need to pass the case class, internally to the API implementation the relevant implicits should be summoned.
object EventHandler {
private def printEvent[A <: Event](a: A)(implicit printer: Printer[A]): Unit = {
print("Handling event: ")
printer.printIt(a)
}
def handle(a: Event): Unit = {
import ShowInstances._
import PrinterInstances._
// I'd like to do this:
//EventHandler.printEvent(a)
// but I have to do this
a match {
case s: SomeEvent => EventHandler.printEvent(s)
case o: OtherEvent => EventHandler.printEvent(o)
}
}
}
The comments in the EventHandler.handle() method indicate my issue: is there a way to have the compiler select the right implicits for me?
I suspect the answer is no, because at compile time the compiler doesn't know which subclass of Event handle() will receive, but I wanted to see if there's another way. In my actual code I control and can change the PrinterInstances code, but I can't change the signature of the printEvent method (that's provided by one of the libraries).
EDIT: I think this is the same as Provide implicits for all subtypes of sealed type. The answer there is nearly 2 years old; I'm wondering if it's still the best approach?
You have to do the pattern matching somewhere. Do it in the Show instance:
implicit val showEvent = new Show[Event] {
def show(a: Event) = a match {
case SomeEvent(msg) => s"SomeEvent: $msg"
case OtherEvent(code) => s"OtherEvent: $code"
}
}
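With that instance in implicit scope (e.g. imported alongside the other Show instances), the commented-out call in handle compiles as written; a sketch:
def handle(a: Event): Unit = {
  import PrinterInstances._
  // Printer[Event] is derived from somePrinter[Event] using the Show[Event] above
  EventHandler.printEvent(a)
}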
If you absolutely need individual instances for SomeEvent and OtherEvent, you can provide them in a different object so they can be imported separately.
If Show is defined to be contravariant (i.e. as trait Show[-A] { ... }, with a minus on the generic type) then everything works out of the box and a Show[Event] is usable as a Show[SomeEvent] (and as a Show[OtherEvent] for that matter).
If Show is unfortunately not written to be contravariant, then we might have to do a little more juggling on our end than we'd like. One thing we can do is declare all of our SomeEvent values simply as Events, e.g. val fooEvent: Event = SomeEvent("foo"). Then fooEvent will be showable.
In a more extreme version of the above trick, we can actually hide our inheritance hierarchy:
sealed trait Event {
def fold[X]( withSomeEvent: String => X,
withOtherEvent: String => X ): X
}
object Event {
private case class SomeEvent(msg: String) extends Event {
def fold[X]( withSomeEvent: String => X,
withOtherEvent: String => X ): X = withSomeEvent(msg)
}
private case class OtherEvent(code: String) extends Event {
def fold[X]( withSomeEvent: String => X,
withOtherEvent: String => X ): X = withOtherEvent(code)
}
def someEvent(msg: String): Event = SomeEvent(msg)
def otherEvent(code: String): Event = OtherEvent(code)
}
Event.someEvent and Event.otherEvent allow us to construct values, and fold allows us to pattern match.
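For example, the Show[Event] from the other answer could be written with fold instead of a match (a sketch reusing the Show trait from the question):
implicit val showEvent: Show[Event] = new Show[Event] {
  def show(a: Event): String =
    a.fold(msg => s"SomeEvent: $msg", code => s"OtherEvent: $code")
}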
Consider a simple object that serves as a storage for some cohesive data discriminated by type. I want it to have an API which is:
consistent and concise;
compile-time safe.
I can easily provide such API for saving objects by using overloading:
object CatsAndDogsStorage {
def save(key: String, cat: Cat): Future[Unit] = { /* write cat to db */ }
def save(key: String, dog: Dog): Future[Unit] = { /* save dog to Map */ }
/* other methods */
}
But I cannot find a good way to declare such methods for loading objects. Ideally, I would want something like this:
// Futures of two unrelated objects
val catFuture: Future[Cat] = CatsAndDogsStorage.load[Cat]("Lucky")
val dogFuture = CatsAndDogsStorage.load[Dog]("Lucky")
I'm fairly new to Scala, but I know that I have these options (sorted from the least preferred):
1. Different method names
def loadCat(key: String): Future[Cat] = { /* ... */ }
def loadDog(key: String): Future[Dog] = { /* ... */ }
Not the most concise method. I dislike how if I decide to rename Cat to something else, I would have to rename the method too.
2. Runtime check for provided class
def load[T: ClassTag](key: String): Future[T] = classTag[T] match {
case t if t == classOf[Dog] => /* ... */
case c if c == classOf[Cat] => /* ... */
}
This one gives the desired syntax, but it fails in runtime, not compile time.
3. Dummy implicits
def load[T <: Cat](key: String): Future[Cat] = /* ... */
def load[T <: Dog](key: String)(implicit i1: DummyImplicit): Future[Dog]
This code becomes a nightmare when you have a handful of types you need to support. It also makes it quite inconvenient to remove those types.
4. Sealed trait + runtime check
sealed trait Loadable
case class Cat() extends Loadable
case class Dog() extends Loadable
def load[T <: Loadable: ClassTag](key: String): Future[T] = classTag[T] match {
case t if t == classOf[Dog] => /* ... */
case c if c == classOf[Cat] => /* ... */
}
This has the advantage of 2) while preventing the user from asking for anything besides Dog or Cat. Still, I would rather not change the object hierarchy. I could use union types to make the code shorter.
So, the last solution is okay, but it still feels hack-ish, and maybe there is another known way which I just cannot figure out.
Having functions with slightly different names doing similar work but for different types doesn't seem bad to me.
If you really want to have a facade API dispatching on the type, you can use typeclasses.
trait SaveFn[T] extends (T => Future[Unit]) {}
object SaveFn {
implicit object SaveDog extends SaveFn[Dog] { def apply(dog: Dog): Future[Unit] = ??? }
implicit object SaveCat extends SaveFn[Cat] { def apply(cat: Cat): Future[Unit] = ??? }
}
object Storage {
def save[T : SaveFn](in: T): Future[Unit] = implicitly[SaveFn[T]](in)
}
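Usage might then look like this (the Dog and Cat constructors are hypothetical); the instance is resolved from the argument type:
Storage.save(Dog("Rex"))   // picks up SaveDog
Storage.save(Cat("Lucky")) // picks up SaveCat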
For the .load case:
trait LoadFn[T] extends (String => Future[T]) {}
object LoadFn {
implicit object LoadDog extends LoadFn[Dog] { def apply(key: String): Future[Dog] = ??? }
implicit object LoadCat extends LoadFn[Cat] { def apply(key: String): Future[Cat] = ??? }
}
object Storage {
def load[T : LoadFn](key: String): Future[T] = implicitly[LoadFn[T]](key)
}
For .load, the type cannot be inferred from the arguments as it can for .save, so it's a bit less nice to use: Storage.load[Dog]("dogKey")