The git repo that contains the issue can be found here: https://github.com/mdedetrich/scalacache-example
The problem I currently have is that I am trying to make my ScalaCache backend-agnostic, with the backend configurable at runtime using Typesafe Config.
The issue I have is that ScalaCache parameterizes the constructors of the cache, i.e. to construct a Caffeine cache you would do
ScalaCache(CaffeineCache())
whereas for a SentinelRedisCache you would do
ScalaCache(SentinelRedisCache("", Set.empty, ""))
In my case, I have created a generic cache wrapper called MyCache as shown below
import scalacache.ScalaCache
import scalacache.serialization.Codec
final case class MyCache[CacheRepr](scalaCache: ScalaCache[CacheRepr])(
    implicit stringCodec: Codec[Int, CacheRepr]) {
  def putInt(value: Int) = scalaCache.cache.put[Int]("my_int", value, None)
}
We need to carry the CacheRepr along because this is how ScalaCache knows how to serialize any type T. CaffeineCache uses a CacheRepr of InMemoryRepr, whereas SentinelRedisCache uses a CacheRepr of Array[Byte].
I also have a Config which just stores which cache is being used, i.e.
import scalacache.Cache
import scalacache.caffeine.CaffeineCache
import scalacache.redis.SentinelRedisCache
final case class ApplicationConfig(cache: Cache[_])
The reason why it's a Cache[_] is that at compile time we don't know which cache is being used; ApplicationConfig will be instantiated at runtime with either CaffeineCache or SentinelRedisCache.
And this is where the crux of the problem is: Scala is unable to find an implicit Codec for the wildcard type if we just use applicationConfig.cache as a constructor argument, i.e. https://github.com/mdedetrich/scalacache-example/blob/master/src/main/scala/Main.scala#L17
If we uncomment the above line, we get
[error] /Users/mdedetrich/github/scalacache-example/src/main/scala/Main.scala:17:37: Could not find any Codecs for type Int and _$1. Please provide one or import scalacache._
[error] Error occurred in an application involving default arguments.
[error] val myCache3: MyCache[_] = MyCache(ScalaCache(applicationConfig.cache)) // This doesn't
Does anyone know how to solve this problem? Essentially I want to specify in my ApplicationConfig that cache is of type Cache[InMemoryRepr | Array[Byte]] rather than just Cache[_] (so that the Scala compiler knows to look up implicits for either InMemoryRepr or Array[Byte]), and for MyCache to be defined something like this:
final case class MyCache[CacheRepr <: InMemoryRepr | Array[Byte]](scalaCache: ScalaCache[CacheRepr])
You seem to be asking for the compiler to resolve implicit values based on the run-time selection of the cache type. This is not possible because the compiler is no longer running by the time the application code starts.
You have to make the type resolution happen at compile time, not run time. So you need to define a trait that represents the abstract interface to the cache, and provide a factory function that returns a specific instance based on the setting in ApplicationConfig. It might look something like this (untested):
sealed trait MyScalaCache {
  def putInt(value: Int): Unit
}

object MyScalaCache {
  def apply(): MyScalaCache =
    if (ApplicationConfig.useCaffeine) {
      MyCache(ScalaCache(CaffeineCache()))
    } else {
      MyCache(ScalaCache(SentinelRedisCache("", Set.empty, "")))
    }
}

final case class MyCache[CacheRepr](scalaCache: ScalaCache[CacheRepr])(
    implicit stringCodec: Codec[Int, CacheRepr]) extends MyScalaCache {
  def putInt(value: Int) = scalaCache.cache.put[Int]("my_int", value, None)
}
The compiler will resolve the implicit in MyCache at compile time where the two concrete instances are specified in apply.
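For illustration, a minimal usage sketch (assuming the ApplicationConfig.useCaffeine flag from above): the concrete CacheRepr is fixed inside each branch of apply, so the Codec implicits are resolved there, and callers only ever see the abstract MyScalaCache interface.

val cache: MyScalaCache = MyScalaCache() // backend chosen at runtime
cache.putInt(42)                         // same call regardless of backend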
Loading a Ficus configuration like
def loadConfiguration[T <: Product](): T = {
  import net.ceedubs.ficus.readers.ArbitraryTypeReader._
  import net.ceedubs.ficus.Ficus._
  val config: Config = ConfigFactory.load()
  config.as[T]
}
fails with:
Cannot generate a config value reader for type T, because it has no apply method in a companion object that returns type T, and it doesn't have a primary constructor
whereas directly specifying a concrete case class (e.g. SomeClass) instead of T works just fine. What am I missing here?
Ficus uses the type class pattern, which allows you to constrain generic types by specifying operations that must be available for them. Ficus also provides type class instance "derivation", which in this case is powered by a macro that can inspect the structure of a specific case class-like type and automatically create a type class instance.
The problem in this case is that T isn't a specific case class-like type—it's any old type that extends Product, which could be something nice like this:
case class EasyToDecode(a: String, b: String, c: String)
But it could also be:
trait X extends Product {
  val youWillNeverDecodeMe: String
}
The macro you've imported from ArbitraryTypeReader has no idea which of these it will get, since T is generic here. So you'll need a different approach.
The relevant type class here is ValueReader, and you could minimally change your code to something like the following to make sure T has a ValueReader instance (note that the T: ValueReader syntax here is what's called a "context bound"):
import net.ceedubs.ficus.Ficus._
import net.ceedubs.ficus.readers.ValueReader
import com.typesafe.config.{ Config, ConfigFactory }
def loadConfiguration[T: ValueReader]: T = {
  val config: Config = ConfigFactory.load()
  config.as[T]
}
This specifies that T must have a ValueReader instance (which allows us to use .as[T]) but says nothing else about T, or about where its ValueReader instance needs to come from.
The person calling this method with a concrete type MyType then has several options. Ficus provides instances that are automatically available everywhere for many standard library types, so if MyType is e.g. Int, they're all set:
scala> ValueReader[Int]
res0: net.ceedubs.ficus.readers.ValueReader[Int] = net.ceedubs.ficus.readers.AnyValReaders$$anon$2@6fb00268
If MyType is a custom type, then either they can manually define their own ValueReader[MyType] instance, or they can import one that someone else has defined, or they can use generic derivation (which is what ArbitraryTypeReader does).
The key point here is that the type class pattern allows you as the author of a generic method to specify the operations you need, without saying anything about how those operations will be defined for a concrete type. You just write T: ValueReader, and your caller imports ArbitraryTypeReader as needed.
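For example, a caller with a concrete type might look like this (a minimal sketch; MyConfig and its fields are hypothetical and assumed to match the loaded application.conf):

import net.ceedubs.ficus.readers.ArbitraryTypeReader._ // derives ValueReader[MyConfig]

case class MyConfig(host: String, port: Int) // hypothetical config class

val myConfig = loadConfiguration[MyConfig] // ValueReader[MyConfig] is derived at this call site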
In Java, while type arguments are erased at runtime, it is possible to find the actual type arguments passed to a superclass:
class Derived extends Base<String> {
  // ...
}
ParameterizedType type = (ParameterizedType)Derived.class.getGenericSuperclass();
Type[] args = type.getActualTypeArguments(); // gives {String.class}
While I can apply the same Java reflection to a Scala class, it does not catch Scala's value types:
class Base[T]
class Derived extends Base[Int]
classOf[Derived]
  .getGenericSuperclass
  .asInstanceOf[ParameterizedType]
  .getActualTypeArguments // gives {Object.class}, not {int.class}
Is it possible to determine the value type used when extending from a generic superclass? I am loading classes from a jar file so it'd be best to achieve this only using a java.lang.Class instance.
With Java reflection you won't be able to obtain Int and other AnyVal types, because they are handled specially by the compiler: used generically, they are represented by Object. However, you can use Scala reflection, and it is entirely possible to go from Java reflection to Scala reflection. Here's how:
import scala.reflect.runtime.universe._
class Base[T]
class Derived extends Base[Int]
object Main extends App {
  val rm = runtimeMirror(getClass.getClassLoader) // or whatever class loader you're using
  val derivedSym = rm.staticClass(classOf[Derived].getName)
  val baseSym = rm.staticClass(classOf[Base[_]].getName)
  val TypeRef(_, _, params) = derivedSym.typeSignature.baseType(baseSym)
  println(s"$derivedSym extends $baseSym[${params.mkString(", ")}]")
}
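Since the question mentions classes loaded from a jar, the same idea can be wrapped in a small helper that starts from an arbitrary java.lang.Class instance (a sketch based on the code above; baseTypeArgs is an illustrative name):

import scala.reflect.runtime.universe._

// Recover the type arguments that clazz passes to Base, given the loader that loaded it.
def baseTypeArgs(clazz: Class[_], loader: ClassLoader): List[Type] = {
  val rm = runtimeMirror(loader)
  val classSym = rm.staticClass(clazz.getName)
  val baseSym = rm.staticClass(classOf[Base[_]].getName)
  val TypeRef(_, _, params) = classSym.typeSignature.baseType(baseSym)
  params
}

// baseTypeArgs(classOf[Derived], getClass.getClassLoader) gives List(Int)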
Unfortunately, unless you know exactly what you are searching for, you will have a hard time finding proper documentation. I found the answer on the scala-users mailing list. Scala reflection is still experimental and, AFAIK, it will probably be superseded by a better one in future Scala versions.
I am trying to get my head around the best way to code this implementation. To give you an example, here is what my DAO handler code looks like:
trait IDAOHandler[+T] {
  def create[U <: AnyRef: Manifest](content: U): Try[String]
}

class MongoDAOHAndler extends IDAOHandler[T]...
So I am creating an actor that will handle all my persistence tasks, which includes serializing the content and updating the MongoDB database.
I am using Akka, and the trick is in the receive method: how do I handle the generic type parameter? Even though my actor code is non-generic, the messages it receives will have a generic type, and based on the content type in createDAO I was planning to get the appropriate DAO handler (described above) and invoke the method.
case class createDAO[T](content: T)(implicit val metaInfo: TypeTag[T])

class CDAOActor(daofactory: DAOFactory) extends BaseActor {
  def wrappedReceive = {
    case x: createDAO[_] => pmatch(x)
  }

  def pmatch[A](c: createDAO[A]) {
    // getting a DAO handler here will not work, because it needs a Manifest
  }
}
Let me know if there are any other ways to rewrite this implementation.
You might already know this, but a little background just to be sure: in Scala (and Java) we have what is called type erasure. This means that the parametric types are used to verify the correctness of the code at compile time but are then removed (and "do not give a runtime cost", http://docs.oracle.com/javase/tutorial/java/generics/erasure.html). Pattern matching happens at runtime, so the parametric types are already erased.
The good news is that you can make the Scala compiler keep the erased type by using TypeTag, like you have done in your case class, or ClassTag, which contains less information but also keeps the erased type. You can get the erased type from the method .erasure (.runtimeClass in Scala 2.11), which will return the Java Class of the T type. You still won't be able to use that as the type parameter for a method call, as that again happens at compile time and you are now looking at the type at runtime, but what you can do is compare this type at runtime with if/else or pattern matching.
So for example you could implement a method on your daofactory that takes a Class[_] parameter and returns a DAO instance for that class. In pmatch you would then take the erased type out of the tag and pass it along.
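A minimal sketch of that idea (the branches and handler lookups are illustrative; the point is recovering the Class from the tag):

def pmatch[A](c: createDAO[A]): Unit = {
  // Recover the erased runtime Class from the TypeTag carried by the message
  val clazz: Class[_] = c.metaInfo.mirror.runtimeClass(c.metaInfo.tpe)
  // Compare it at runtime with if/else or pattern matching
  if (clazz == classOf[String]) {
    // look up and use the DAO handler registered for String
  } else if (clazz == classOf[Int]) {
    // look up and use the DAO handler registered for Int
  }
}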
Here is some more info about the tags, why they exist and how they work:
http://docs.scala-lang.org/overviews/reflection/typetags-manifests.html
I took a slightly different approach, a kind of dispatcher pattern; here is the revised code:
trait IDAOProcess {
  def process(daofactory: IDAOFactory, sender: ActorRef): Unit
}

case class createDAO[T <: AnyRef: Manifest](content: T)(implicit val metaInfo: TypeTag[T])
    extends IDAOProcess {
  def process(daofactory: IDAOFactory, sender: ActorRef): Unit =
    for (handler <- daofactory.getDAO[T]) {
      handler.create(content)
    }
}

class DAOActor(daofactory: IDAOFactory) extends BaseActor {
  def wrappedReceive = {
    case x: IDAOProcess => x.process(daofactory, sender)
  }
}
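A hedged usage sketch (the actor creation and the IDAOFactory implementation are assumed; SomeEntity is illustrative): the Manifest and TypeTag are captured at the call site, where T is concrete, so the actor itself stays non-generic.

case class SomeEntity(id: String)

daoActor ! createDAO(SomeEntity("id-1")) // evidence captured here, dispatched inside process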
I have a simple type hierarchy like the following:
sealed abstract class Config

object Config {
  case class Valid(name: String, traits: List[String]) extends Config
  case class Invalid(error: String) extends Config
}
implicit val validFormat = jsonFormatFor(Config.Valid)
implicit val invalidFormat = jsonFormatFor(Config.Invalid)
I also have client code that does the following:
newHttpServer().addHandler("/config", extractConfig)
The extractConfig method performs some computations and returns either a Config.Valid or a Config.Invalid, which the server will automatically convert to JSON using the implicit JSON format objects. My problem is that there is a compiler error because extractConfig returns a Config:
type mismatch; found : Config
required: spray.httpx.marshalling.ToResponseMarshallable
If I change the return type of extractConfig to Config.Valid then the server code compiles, because jsonFormatFor(...) supplies the necessary automatic type conversion to make the response a ToResponseMarshaller (though I admit I don't fully understand this automatic conversion, being somewhat new to Scala). Is there a simple way to solve this by declaring that any subclass of Config must be a ToResponseMarshaller, given that ToResponseMarshaller is a trait that seems to be supplied via implicit conversions?
If you only have Config.Valid and Config.Invalid it should be sufficient that extractConfig returns an Either[Config.Valid, Config.Invalid]. Then your formats above should work.
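For illustration, a minimal sketch of that signature (the body and configIsValid are hypothetical; the implicit formats above are assumed to be in scope):

def extractConfig: Either[Config.Valid, Config.Invalid] =
  if (configIsValid) Left(Config.Valid("name", List("some-trait")))
  else Right(Config.Invalid("could not parse config"))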
Another possibility is to write your own JsonWriter (see this thread from the mailing list).
There seems to be a lot of enthusiasm among Scala bloggers lately for the type class pattern, in which a simple class has functionality added to it by an additional class conforming to some trait or pattern. As a vastly oversimplified example, the simple class:
case class Wotsit (value: Int)
can be adapted to the Foo trait:
trait Foo[T] {
  def write(t: T): Unit
}
with the help of this type class:
implicit object WotsitIsFoo extends Foo[Wotsit] {
  def write(wotsit: Wotsit) = println(wotsit.value)
}
The type class instance is typically captured at compile time with implicits, allowing both the Wotsit and its type class to be passed together into a higher-order function:
def writeAll[T](items: List[T])(implicit tc: Foo[T]) =
  items.foreach(w => tc.write(w))

writeAll(wotsits)
(before you correct me, I said it was an oversimplified example)
However, the use of implicits assumes that the precise type of the items is known at compile time. I find in my code this often isn't the case: I will have a list of some type of item List[T], and need to discover the correct type class to work on them.
The suggested approach in Scala would appear to be to add the type class argument at all points in the call hierarchy. This can get annoying as the code scales and these dependencies need to be passed down increasingly long chains, through methods to which they are increasingly irrelevant. This makes the code cluttered and harder to maintain, the opposite of what Scala is for.
Typically this is where dependency injection would step in, using a library to supply the desired object at the point it's needed. Details vary with the library chosen for DI - I've written my own in Java in the past - but typically the point of injection needs to define precisely the object desired.
Trouble is, in the case of a type class the precise value isn't known at compile time. It must be selected based on a polymorphic description. And crucially, the type information has been erased by the compiler. Manifests are Scala's solution to type erasure, but it's far from clear to me how to use them to address this issue.
What techniques and dependency injection libraries for Scala would people suggest as a way of tackling this? Am I missing a trick? The perfect DI library? Or is this really the sticking point it seems?
Clarification
I think there are really two aspects to this. In the first case, the point where the type class is needed is reached by direct function calls from the point where the exact type of its operand is known, and so sufficient type wrangling and syntactic sugar can allow the type class to be passed to the point it's needed.
In the second case, the two points are separated by a barrier - such as an API that can't be altered, or being stored in a database or object store, or serialised and sent to another computer - that means the type class can't be passed along with its operand. In this case, given an object whose type and value are known only at runtime, the type class needs somehow to be discovered.
I think functional programmers have a habit of assuming the first case - that with a sufficiently advanced language, the type of the operand will always be knowable. David and mkniessl provided good answers for this, and I certainly don't want to criticise those. But the second case definitely does exist, and that's why I brought dependency injection into the question.
A fair amount of the tediousness of passing down those implicit dependencies can be alleviated by using the new context bound syntax. Your example becomes
def writeAll[T: Foo](items: List[T]) =
  items.foreach(w => implicitly[Foo[T]].write(w))
which compiles identically but makes for nice and clear signatures and has fewer "noise" variables floating around.
Not a great answer, but the alternatives probably involve reflection, and I don't know of any library that will just make this automatically work.
(I have substituted the names in the question, they did not help me think about the problem)
I'll attack the problem in two steps. First I show how nested scopes avoid having to declare the type class parameter all the way down its usage. Then I'll show a variant, where the type class instance is "dependency injected".
Type class instance as class parameter
To avoid having to declare the type class instance as implicit parameter in all intermediate calls, you can declare the type class instance in a class defining a scope where the specific type class instance should be available. I'm using the shortcut syntax ("context bound") for the definition of the class parameter.
object TypeClassDI1 {

  // The type class
  trait ATypeClass[T] {
    def typeClassMethod(t: T): Unit
  }

  // Some data type
  case class Something(value: Int)

  // The type class instance as implicit
  implicit object SomethingInstance extends ATypeClass[Something] {
    def typeClassMethod(s: Something): Unit =
      println("SomethingInstance " + s.value)
  }

  // A method directly using the type class
  def writeAll[T: ATypeClass](items: List[T]) =
    items.foreach(w => implicitly[ATypeClass[T]].typeClassMethod(w))

  // A class defining a scope with a type class instance known to be available
  class ATypeClassUser[T: ATypeClass] {
    // bar only indirectly uses the type class via writeAll
    // and does not declare an implicit parameter for it.
    def bar(items: List[T]) {
      // (here the evidence class parameter defined
      // with the context bound is used for writeAll)
      writeAll(items)
    }
  }

  def main(args: Array[String]) {
    val aTypeClassUser = new ATypeClassUser[Something]
    aTypeClassUser.bar(List(Something(42), Something(4711)))
  }
}
Type class instance as writable field (setter injection)
A variant of the above that can be used with setter injection. This time the type class instance is passed via a setter call to the bean using the type class.
object TypeClassDI2 {

  // The type class
  trait ATypeClass[T] {
    def typeClassMethod(t: T): Unit
  }

  // Some data type
  case class Something(value: Int)

  // The type class instance (not implicit here)
  object SomethingInstance extends ATypeClass[Something] {
    def typeClassMethod(s: Something): Unit =
      println("SomethingInstance " + s.value)
  }

  // A method directly using the type class
  def writeAll[T: ATypeClass](items: List[T]) =
    items.foreach(w => implicitly[ATypeClass[T]].typeClassMethod(w))

  // A "service bean" class defining a scope with a type class instance.
  // Setter-based injection style for simplicity.
  class ATypeClassBean[T] {
    implicit var aTypeClassInstance: ATypeClass[T] = _

    // bar only indirectly uses the type class via writeAll
    // and does not declare an implicit parameter for it.
    def bar(items: List[T]) {
      // (here the implicit var is used for writeAll)
      writeAll(items)
    }
  }

  def main(args: Array[String]) {
    val aTypeClassBean = new ATypeClassBean[Something]()
    // "inject" the type class instance
    aTypeClassBean.aTypeClassInstance = SomethingInstance
    aTypeClassBean.bar(List(Something(42), Something(4711)))
  }
}
Note that the second solution has the common flaw of setter-based injection: you can forget to set the dependency and get a nice NullPointerException upon use...
The argument against type classes as dependency injection here is that with type classes the "precise type of the items is known at compile time", whereas with dependency injection it is not. You might be interested in this Scala project rewrite effort, where I moved from the cake pattern to type classes for dependency injection. Take a look at the file where the implicit declarations are made. Notice how the use of environment variables determines the precise type? That is how you can reconcile the compile-time requirements of type classes with the run-time needs of dependency injection.
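For illustration, a self-contained sketch of that idea (all names are illustrative): the runtime setting selects a branch, and within each branch the type is concrete, so the implicit is resolved at compile time.

object RuntimeChoice extends App {
  trait Writer[S] { def write(msg: String): Unit }

  trait MemoryStore
  trait RedisStore

  implicit val memoryWriter: Writer[MemoryStore] = new Writer[MemoryStore] {
    def write(msg: String) = println(s"memory: $msg")
  }
  implicit val redisWriter: Writer[RedisStore] = new Writer[RedisStore] {
    def write(msg: String) = println(s"redis: $msg")
  }

  def run[S: Writer](msg: String): Unit = implicitly[Writer[S]].write(msg)

  // The environment variable picks the branch at runtime; each branch is
  // type-checked and its implicit resolved at compile time.
  sys.env.getOrElse("STORE", "memory") match {
    case "redis" => run[RedisStore]("hello")
    case _       => run[MemoryStore]("hello")
  }
}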