Loading a ficus configuration like
def loadConfiguration[T <: Product](): T = {
  import net.ceedubs.ficus.readers.ArbitraryTypeReader._
  import net.ceedubs.ficus.Ficus._
  val config: Config = ConfigFactory.load()
  config.as[T]
}
fails with:
Cannot generate a config value reader for type T, because it has no apply method in a companion object that returns type T, and it doesn't have a primary constructor
whereas directly specifying a concrete case class instead of T (e.g. SomeClass) works just fine. What am I missing here?
Ficus uses the type class pattern, which allows you to constrain generic types by specifying operations that must be available for them. Ficus also provides type class instance "derivation", which in this case is powered by a macro that can inspect the structure of a specific case class-like type and automatically create a type class instance.
The problem in this case is that T isn't a specific case class-like type—it's any old type that extends Product, which could be something nice like this:
case class EasyToDecode(a: String, b: String, c: String)
But it could also be:
trait X extends Product {
val youWillNeverDecodeMe: String
}
The macro you've imported from ArbitraryTypeReader has no idea at this point, since T is generic here. So you'll need a different approach.
The relevant type class here is ValueReader, and you could minimally change your code to something like the following to make sure T has a ValueReader instance (note that the T: ValueReader syntax here is what's called a "context bound"):
import net.ceedubs.ficus.Ficus._
import net.ceedubs.ficus.readers.ValueReader
import com.typesafe.config.{ Config, ConfigFactory }
def loadConfiguration[T: ValueReader]: T = {
val config: Config = ConfigFactory.load()
config.as[T]
}
This specifies that T must have a ValueReader instance (which allows us to use .as[T]) but says nothing else about T, or about where its ValueReader instance needs to come from.
The person calling this method with a concrete type MyType then has several options. Ficus provides instances that are automatically available everywhere for many standard library types, so if MyType is e.g. Int, they're all set:
scala> ValueReader[Int]
res0: net.ceedubs.ficus.readers.ValueReader[Int] = net.ceedubs.ficus.readers.AnyValReaders$$anon$2@6fb00268
If MyType is a custom type, then either they can manually define their own ValueReader[MyType] instance, or they can import one that someone else has defined, or they can use generic derivation (which is what ArbitraryTypeReader does).
The key point here is that the type class pattern allows you as the author of a generic method to specify the operations you need, without saying anything about how those operations will be defined for a concrete type. You just write T: ValueReader, and your caller imports ArbitraryTypeReader as needed.
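For example, a minimal sketch of a call site, where AppSettings is a hypothetical case class and the ArbitraryTypeReader import derives its ValueReader instance:

import net.ceedubs.ficus.Ficus._
import net.ceedubs.ficus.readers.ArbitraryTypeReader._

// Hypothetical case class mirroring the loaded configuration
case class AppSettings(host: String, port: Int)

// The macro derives ValueReader[AppSettings] here, where the concrete type is known
val settings: AppSettings = loadConfiguration[AppSettings]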
I'm toying with Scala for the first time, so bear with me. I'm using tapir to declare an API, and I'm having issues providing a Schema for an enum.
I have a bunch of enums defined that are part of my domain model and that extend Scala's Enumeration. For instance, this is one of them:
object Status extends Enumeration with JsonEnumeration {
val Active, Completed, Archived, Deleted = Value
}
I also have many case classes that use them. For instance, Order uses our previously defined enumeration, like:
case class Order(
id: String,
name: Option[String],
status: Status.Value,
)
I want to make this enum implement a trait that adds an implicit, but without modifying the original Status enumeration (I don't want to couple the Status enum -and all the others- to this trait).
The trait looks like:
import sttp.tapir.{Schema, Validator}
trait TapirEnumeration { e: Enumeration =>
implicit def schemaForEnum: Schema[e.Value] =
Schema.string.validate(Validator.enumeration(e.values.toList, v => Option(v)))
}
I wanted to somehow modify the Order object so the Status enum is now a TapirStatus enum (or something like that) which extends both the original Status and TapirEnumeration, but I don't think that's doable, given that Status is originally defined as a companion object.
Ideally, all the enums I want to expose as responses from my API will implement that TapirEnumeration trait while still extending what they already extend.
What can I do to achieve this? Of course, creating a new enum that implements the trait isn't DRY so it's not an option.
Why does the implicit need to be defined in the enum itself in the first place? Just make it its own definition.
import scala.language.implicitConversions
object EnumImplicits {
implicit def schema[E <: Enumeration](e: E): Schema[e.Value] = ???
}
Then, wherever you need access to that implicit you just make it available with import EnumImplicits._
Here is an example
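A minimal sketch of such an example, assuming the Status enumeration from the question: the ??? is filled in with the body of the TapirEnumeration trait, and the call is explicit since the method takes an explicit argument.

import sttp.tapir.{Schema, Validator}

import scala.language.implicitConversions

object EnumImplicits {
  // Same body as TapirEnumeration.schemaForEnum, but decoupled from the enum
  implicit def schema[E <: Enumeration](e: E): Schema[e.Value] =
    Schema.string.validate(Validator.enumeration(e.values.toList, v => Option(v)))
}

// Explicit use for the Status enumeration from the question
val statusSchema: Schema[Status.Value] = EnumImplicits.schema(Status)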
The git repo that contains the issue can be found here https://github.com/mdedetrich/scalacache-example
The problem I currently have is that I am trying to make my ScalaCache backend-agnostic, with the backend configurable at runtime using Typesafe Config.
The issue I have is that ScalaCache parameterizes the constructors of the cache, i.e. to construct a Caffeine cache you would do
ScalaCache(CaffeineCache())
whereas for a SentinelRedisCache you would do
ScalaCache(SentinelRedisCache("", Set.empty, ""))
In my case, I have created a generic cache wrapper called MyCache as shown below
import scalacache.ScalaCache
import scalacache.serialization.Codec
final case class MyCache[CacheRepr](scalaCache: ScalaCache[CacheRepr])(
implicit stringCodec: Codec[Int, CacheRepr]) {
def putInt(value: Int) = scalaCache.cache.put[Int]("my_int", value, None)
}
We need to carry the CacheRepr along because this is how ScalaCache knows how to serialize any type T. CaffeineCache uses a CacheRepr which is InMemoryRepr, whereas SentinelRedisCache uses a CacheRepr which is Array[Byte].
I also have an ApplicationConfig which just stores which cache is being used, i.e.
import scalacache.Cache
import scalacache.caffeine.CaffeineCache
import scalacache.redis.SentinelRedisCache
final case class ApplicationConfig(cache: Cache[_])
The reason it's a Cache[_] is that at compile time we don't know which cache is being used; ApplicationConfig will be instantiated at runtime with either CaffeineCache or SentinelRedisCache.
And this is where the crux of the problem is: Scala is unable to find an implicit Codec for the wildcard type if we just use applicationConfig.cache as a constructor argument, i.e. https://github.com/mdedetrich/scalacache-example/blob/master/src/main/scala/Main.scala#L17
If we uncomment that line, we get
[error] /Users/mdedetrich/github/scalacache-example/src/main/scala/Main.scala:17:37: Could not find any Codecs for type Int and _$1. Please provide one or import scalacache._
[error] Error occurred in an application involving default arguments.
[error] val myCache3: MyCache[_] = MyCache(ScalaCache(applicationConfig.cache)) // This doesn't
Does anyone know how to solve this problem? Essentially I want to specify in my ApplicationConfig that cache is of type Cache[InMemoryRepr | Array[Byte]] rather than just Cache[_] (so that the Scala compiler knows to look up implicits for either InMemoryRepr or Array[Byte]), and for MyCache to be defined something like this:
final case class MyCache[CacheRepr <: InMemoryRepr | Array[Byte]](scalaCache: ScalaCache[CacheRepr])
You seem to be asking for the compiler to resolve implicit values based on the run-time selection of the cache type. This is not possible because the compiler is no longer running by the time the application code starts.
You have to make the type resolution happen at compile time, not run time. So you need to define a trait that represents the abstract interface to the cache, and provide a factory function that returns a specific instance based on the setting in ApplicationConfig. It might look something like this (untested):
sealed trait MyScalaCache {
  def putInt(value: Int): Unit
}

object MyScalaCache {
  def apply(): MyScalaCache =
    if (ApplicationConfig.useCaffeine) {
      MyCache(ScalaCache(CaffeineCache()))
    } else {
      MyCache(ScalaCache(SentinelRedisCache("", Set.empty, "")))
    }
}

final case class MyCache[CacheRepr](scalaCache: ScalaCache[CacheRepr])(
    implicit stringCodec: Codec[Int, CacheRepr]) extends MyScalaCache {
  def putInt(value: Int): Unit = scalaCache.cache.put[Int]("my_int", value, None)
}
The compiler will resolve the implicit in MyCache at compile time where the two concrete instances are specified in apply.
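A hypothetical call site then selects the backend at runtime, while both sets of implicits were already resolved at compile time:

// Sketch: the runtime flag only chooses between branches that both type-checked
val cache: MyScalaCache = MyScalaCache()
cache.putInt(42)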
I have a simple type hierarchy like the following:
sealed abstract class Config
object Config {
case class Valid(name: String, traits: List[String]) extends Config
case class Invalid(error: String) extends Config
}
implicit val validFormat = jsonFormat2(Config.Valid)
implicit val invalidFormat = jsonFormat1(Config.Invalid)
I also have client code that does the following:
newHttpServer().addHandler("/config", extractConfig)
The extractConfig method performs some computations and returns either a Config.Valid or a Config.Invalid, which the server will automatically convert to json by using the implicit json format objects. My problem is that there is a compiler error because extractConfig returns a Config:
type mismatch; found : Config
required: spray.httpx.marshalling.ToResponseMarshallable
If I change the return type of extractConfig to Config.Valid then the server code compiles, because the implicit JSON format supplies the necessary automatic conversion to make the response a ToResponseMarshallable (though I admit I don't fully understand this automatic conversion, being somewhat new to Scala). Is there a simple way to solve this by declaring that any subclass of Config must be a ToResponseMarshaller, given that ToResponseMarshaller is a trait that seems to be supplied via implicit conversions?
If you only have Config.Valid and Config.Invalid it should be sufficient that extractConfig returns an Either[Config.Valid, Config.Invalid]. Then your formats above should work.
Another possibility is to write your own JsonWriter (see this thread from the mailing list).
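The second option could look something like the following sketch (untested): a writer for the sealed base type that dispatches on the concrete subclass, so the formats defined above are picked up.

import spray.json._

// A writer for the base type that delegates to the subclass formats in scope
implicit object ConfigWriter extends RootJsonWriter[Config] {
  def write(c: Config): JsValue = c match {
    case v: Config.Valid   => v.toJson
    case i: Config.Invalid => i.toJson
  }
}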
I want to call a constructor of a generic type T, but I also want it to have a specific constructor with only one Int argument:
class Class1[T] {
def method1(i: Int) = {
val instance = new T(i) // oops!
i
}
}
How do I specify this requirement?
UPDATE:
How acceptable (flexible, etc.) is it to use something like this? It's the template method pattern.
abstract class Class1[T] {
def creator: Int => T
def method1(i: Int) = {
val instance = creator(i) //seems ok
i
}
}
Scala doesn't allow you to specify the constructor's signature in a type constraint (as, e.g., C# does).
However Scala does allow you to achieve something equivalent by using the type class pattern. This is more flexible, but requires writing a bit more boilerplate code.
First, define a trait which will be an interface for creating a T given an Int.
trait Factory[T] {
def fromInt(i: Int): T
}
Then, define an implicit instance for any type you want. Let's say you have some class Foo with an appropriate constructor.
implicit val FooFactory: Factory[Foo] = new Factory[Foo] {
def fromInt(i: Int) = new Foo(i)
}
Now, you can specify a context bound for the type parameter T in the signature of Class1:
class Class1[T : Factory] {
def method1(i: Int) = {
val instance = implicitly[Factory[T]].fromInt(i)
// ...
}
}
The constraint T : Factory says that there must be an implicit Factory[T] in scope. When you need to use the instance, you grab it from implicit scope using the implicitly method.
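For example, with the Foo instance defined above in scope:

val c1 = new Class1[Foo] // compiles because an implicit Factory[Foo] is available
c1.method1(42)           // fromInt builds the Foo inside method1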
Alternatively, you could specify the factory as an implicit parameter to the method that requires it.
class Class1[T] {
def method1(i: Int)(implicit factory: Factory[T]) = {
val instance = factory.fromInt(i)
// ...
}
}
This is more flexible than putting the constraint in the class signature, because it means you could have other methods on Class1 that don't require a Factory[T]. In that case, the compiler will not enforce that there is a Factory[T] unless you call one of the methods that requires it.
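For example, with a hypothetical Bar that has no Factory instance:

class Bar(val i: Int)

val c2 = new Class1[Bar] // fine: the class itself requires no Factory
// c2.method1(1)         // would not compile: no implicit Factory[Bar] in scope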
In response to your update (with the abstract creator method), this is a perfectly reasonable way to do it, as long as you don't mind creating a subtype of Class1 for every T. Also note that T will need to be a concrete type at any point that you want to create an instance of Class1, because you will need to provide a concrete implementation for the abstract method.
Consider trying to create an instance of Class1 inside another generic method. When using the type class pattern, you can extend the necessary type constraint to the type signature of that method, in order to make this compile:
def instantiateClass1[T : Factory] = new Class1[T]
If you don't need to do this, then you might not need the full power of the type class pattern.
When you create a generic class or trait, the class does not gain special access to the methods of whatever actual class you might parameterise it with. When you say
class Class1[T]
You are saying
This is a class which will work with unspecified type T.
Most of its methods will take instances of type T as a parameter or return T.
Any variance annotations or type bounds attached to the type parameter will be applied whenever it appears as a parameter of one of Class1's methods.
There is no such thing as type "Class1" but there may be an arbitrary number of derived classes of type "Class1[something]"
That's all. You get no special access to T from within Class1, because Scala does not know what T is. If you wanted Class1 to have access to T's fields and methods, you should have extended it or mixed it in.
If you want access to the methods of T (without using reflection), you can only do that from within one of Class1's methods which accepts a parameter of type T. And then you will get whichever version of the method belongs to the specific type of the actual object which is passed.
(You can work around this with reflection, but that is a runtime solution and absolutely not typesafe).
Look at what you are trying to do in your original code snippet...
You have specified that Class1 can be parameterised with any arbitrary type.
You want to invoke T with a constructor which takes a single Int parameter
But what have you done to promise the Scala compiler that T will have such a constructor? Nothing at all. So how can the compiler trust this? Well, it can't.
Even if you added an upper type bound, requiring that T be a subclass of some class which does have such a constructor, that doesn't help; T might be a subclass which has a more complex constructor, which calls back to the simpler constructor. So at the point where Class1 is defined, the compiler can have no confidence about the safety of constructing T with that simple method. So that call cannot be type-safe.
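A quick illustration with a hypothetical pair of classes:

class Base(i: Int)
class Derived(s: String) extends Base(s.length) // no single-Int constructor

class Class1[T <: Base] // `new T(i)` would still be unsound: T could be Derived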
Class-based OO isn't about conjuring unknown types out of the ether; it doesn't let you plunge your hand into a top-hat-shaped class loader and pull out a surprise. It allows you to handle arbitrary already-created instances of some general type without knowing their specific type. At the point where those objects are created, there's no ambiguity at all.
There seems to be a lot of enthusiasm among Scala bloggers lately for the type classes pattern, in which a simple class has functionality added to it by an additional class conforming to some trait or pattern. As a vastly oversimplified example, the simple class:
case class Wotsit (value: Int)
can be adapted to the Foo trait:
trait Foo[T] {
def write (t: T): Unit
}
with the help of this type class:
implicit object WotsitIsFoo extends Foo[Wotsit] {
def write (wotsit: Wotsit) = println(wotsit.value)
}
The type class is typically captured at compile time with implicits, allowing both the Wotsit and its type class to be passed together into a higher-order function:
def writeAll[T] (items: List[T])(implicit tc: Foo[T]) =
items.foreach(w => tc.write(w))
writeAll(wotsits)
(before you correct me, I said it was an oversimplified example)
However, the use of implicits assumes that the precise type of the items is known at compile time. I find in my code this often isn't the case: I will have a list of some type of item List[T], and need to discover the correct type class to work on them.
The suggested approach in Scala would appear to be to add the type class argument at all points in the call hierarchy. This can get annoying as the code scales and these dependencies need to be passed down increasingly long chains, through methods to which they are increasingly irrelevant. This makes the code cluttered and harder to maintain, the opposite of what Scala is for.
Typically this is where dependency injection would step in, using a library to supply the desired object at the point it's needed. Details vary with the library chosen for DI - I've written my own in Java in the past - but typically the point of injection needs to define precisely the object desired.
Trouble is, in the case of a type class the precise value isn't known at compile time. It must be selected based on a polymorphic description. And crucially, the type information has been erased by the compiler. Manifests are Scala's solution to type erasure, but it's far from clear to me how to use them to address this issue.
What techniques and dependency injection libraries for Scala would people suggest as a way of tackling this? Am I missing a trick? The perfect DI library? Or is this really the sticking point it seems?
Clarification
I think there are really two aspects to this. In the first case, the point where the type class is needed is reached by direct function calls from the point where the exact type of its operand is known, and so sufficient type wrangling and syntactic sugar can allow the type class to be passed to the point it's needed.
In the second case, the two points are separated by a barrier - such as an API that can't be altered, or being stored in a database or object store, or serialised and sent to another computer - that means the type class can't be passed along with its operand. In this case, given an object whose type and value are known only at runtime, the type class needs somehow to be discovered.
I think functional programmers have a habit of assuming the first case - that with a sufficiently advanced language, the type of the operand will always be knowable. David and mkniessl provided good answers for this, and I certainly don't want to criticise those. But the second case definitely does exist, and that's why I brought dependency injection into the question.
A fair amount of the tediousness of passing down those implicit dependencies can be alleviated by using the new context bound syntax. Your example becomes
def writeAll[T:Foo] (items: List[T]) =
items.foreach(w => implicitly[Foo[T]].write(w))
which compiles identically but makes for nice and clear signatures and has fewer "noise" variables floating around.
Not a great answer, but the alternatives probably involve reflection, and I don't know of any library that will just make this automatically work.
(I have substituted the names in the question, they did not help me think about the problem)
I'll attack the problem in two steps. First I show how nested scopes avoid having to declare the type class parameter all the way down its usage. Then I'll show a variant, where the type class instance is "dependency injected".
Type class instance as class parameter
To avoid having to declare the type class instance as implicit parameter in all intermediate calls, you can declare the type class instance in a class defining a scope where the specific type class instance should be available. I'm using the shortcut syntax ("context bound") for the definition of the class parameter.
object TypeClassDI1 {
// The type class
trait ATypeClass[T] {
def typeClassMethod(t: T): Unit
}
// Some data type
case class Something (value: Int)
// The type class instance as implicit
implicit object SomethingInstance extends ATypeClass[Something] {
def typeClassMethod(s: Something): Unit =
println("SomthingInstance " + s.value)
}
// A method directly using the type class
def writeAll[T:ATypeClass](items: List[T]) =
items.foreach(w => implicitly[ATypeClass[T]].typeClassMethod(w))
// A class defining a scope with a type class instance known to be available
class ATypeClassUser[T:ATypeClass] {
// bar only indirectly uses the type class via writeAll
// and does not declare an implicit parameter for it.
def bar(items: List[T]): Unit = {
// (here the evidence class parameter defined
// with the context bound is used for writeAll)
writeAll(items)
}
}
def main(args: Array[String]): Unit = {
val aTypeClassUser = new ATypeClassUser[Something]
aTypeClassUser.bar(List(Something(42), Something(4711)))
}
}
Type class instance as writable field (setter injection)
A variant of the above which would be usable using setter injection. This time the type class instance is passed via a setter call to the bean using the type class.
object TypeClassDI2 {
// The type class
trait ATypeClass[T] {
def typeClassMethod(t: T): Unit
}
// Some data type
case class Something (value: Int)
// The type class instance (not implicit here)
object SomethingInstance extends ATypeClass[Something] {
def typeClassMethod(s: Something): Unit =
println("SomthingInstance " + s.value)
}
// A method directly using the type class
def writeAll[T:ATypeClass](items: List[T]) =
items.foreach(w => implicitly[ATypeClass[T]].typeClassMethod(w))
// A "service bean" class defining a scope with a type class instance.
// Setter based injection style for simplicity.
class ATypeClassBean[T] {
implicit var aTypeClassInstance: ATypeClass[T] = _
// bar only indirectly uses the type class via writeAll
// and does not declare an implicit parameter for it.
def bar(items: List[T]): Unit = {
// (here the implicit var is used for writeAll)
writeAll(items)
}
}
def main(args: Array[String]): Unit = {
val aTypeClassBean = new ATypeClassBean[Something]()
// "inject" the type class instance
aTypeClassBean.aTypeClassInstance = SomethingInstance
aTypeClassBean.bar(List(Something(42), Something(4711)))
}
}
Note that the second solution has the common flaw of setter-based injection: you can forget to set the dependency and get a nice NullPointerException upon use...
The argument against type classes as dependency injection here is that with type classes the "precise type of the items is known at compile time", whereas with dependency injection it is not. You might be interested in this Scala project rewrite effort, where I moved from the cake pattern to type classes for dependency injection. Take a look at the file where the implicit declarations are made. Notice how the use of environment variables determines the precise type? That is how you can reconcile the compile-time requirements of type classes with the run-time needs of dependency injection.
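A hedged sketch of that idea, with hypothetical names and a plain trait standing in for the type class: both candidate instances are type-checked at compile time, and an environment variable merely selects which one becomes the implicit in scope at startup.

trait KeyValueStore {
  def put(key: String, value: String): Unit
}

object InMemoryStore extends KeyValueStore {
  def put(key: String, value: String): Unit = println(s"mem: $key=$value")
}

object RedisLikeStore extends KeyValueStore {
  def put(key: String, value: String): Unit = println(s"redis: $key=$value")
}

object StoreInstances {
  // Both candidates were compiled; the environment variable picks one at startup
  implicit val store: KeyValueStore =
    if (sys.env.get("STORE").contains("memory")) InMemoryStore else RedisLikeStore
}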