In my test code I had a repetitive pattern of generating UUIDs in companion objects, for example val userId: String = UUID.randomUUID().toString. To avoid writing this out each time, I've moved it into a trait:
import java.util.UUID

trait UUIDGenerator {
  def uuid(): UUID = UUID.randomUUID()
  def stringifiedUUID(): String = uuid().toString
}
I then make my test classes' companion objects extend this trait, e.g.:
object MySpec extends UUIDGenerator {
...
private val userId: String = stringifiedUUID()
}
The test class itself is not really relevant, but it uses userId a few times:
class MySpec extends Specification with AfterEach with BeforeAll {
...
// not really relevant
}
However, the code no longer compiles, failing with the following error:
encountered unrecoverable cycle resolving import.
[error] Note: this is often due in part to a class depending on a definition nested within its companion.
[error] If applicable, you may wish to try moving some members into another object.
[error] import ca.vasigorc.MySpec._
What is the problem here, and most importantly, what is the proper solution? Renaming all companion objects seems like a worse hack than the current repetition.
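For reference, the shape the compiler's own hint points at (moving the members into an object that is not the companion) can be sketched like this; the fixture object name is hypothetical:

```scala
import java.util.UUID

trait UUIDGenerator {
  def uuid(): UUID = UUID.randomUUID()
  def stringifiedUUID(): String = uuid().toString
}

// A plain object rather than the spec's companion: importing its members
// from MySpec no longer creates a class-to-companion cycle.
object MySpecFixtures extends UUIDGenerator {
  val userId: String = stringifiedUUID()
}

class MySpec {
  import MySpecFixtures._

  def describeUser: String = s"user $userId"
}
```

Whether this is less of a hack than renaming is debatable, but it keeps the trait-based reuse intact.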
I'm implementing a web service using Slick (3.3) for the DB layer. I'm trying to make the Slick implementation as generic as possible, hoping to achieve DB-agnosticism as well as generic table, table query, and DAO classes that abstract over the service's models as much as possible. I'm combining several techniques:
A model hierarchy extending from a common base trait
A table hierarchy, extracting common columns for model traits (inspired by http://gavinschulz.com/posts/2016-01-30-common-model-fields-with-slick-3-part-i.html)
DB-agnosticism, relying on the JdbcProfile trait rather than any DB-specific profile implementation (as described here: https://stackoverflow.com/a/31128239/4234254 and in the Slick multi-db docs)
Cake pattern for dependency injection
I'm having trouble layering some of the schema elements, however, and not being a Scala type expert, I've been unable to figure out the solution on my own. I've created a reproduction of the issue, minimized as much as possible and using a mock Slick library. The full code can be found here: https://github.com/anqit/slick_cake_minimal_error_repro/blob/master/src/main/scala/com/anqit/repro/Repro.scala but I'll go through it below.
My "slick" library and model classes:
abstract class Table[E]
type TableQuery[W <: Table[_]] = List[W] // not actually a list, but need a concrete type constructor to demonstrate the issue
object TableQuery {
def apply[W <: Table[_]]: TableQuery[W] = List[W]()
}
trait BaseEntity
case class SubEntityA() extends BaseEntity
case class SubEntityB() extends BaseEntity
Combining technique 2 in the list above with the cake pattern, I'm creating traits that wrap schema elements for each model. The base schema includes columns common to entity tables (e.g. id), and entity tables inherit from that:
trait BaseSchema[E <: BaseEntity] {
// provides common functionality
abstract class BaseTableImpl[E] extends Table[E]
def wrapper: TableQuery[_ <: BaseTableImpl[E]]
}
// functionality specific to SubEntityA
trait SchemaA extends BaseSchema[SubEntityA] {
class TableA extends BaseTableImpl[SubEntityA]
// this definition compiles fine without a type annotation
val queryA = TableQuery[TableA]
def wrapper = queryA
}
// functionality specific to SubEntityB that depends on SchemaA
trait SchemaB extends BaseSchema[SubEntityB] { self: SchemaA =>
class TableB extends BaseTableImpl[SubEntityB] {
// uses SchemaA's queryA to make a FK
}
/*
attempting to define wrapper here without a type annotation, unlike in SchemaA above:
def wrapper = TableQuery[TableB]
results in the following compilation error:
type mismatch;
[error]  found   : Repro.this.TableQuery[SchemaB.this.TableB]
[error]     (which expands to)  List[SchemaB.this.TableB]
[error]  required: Repro.this.TableQuery[_ <: SchemaB.this.BaseTableImpl[_1]]
[error]     (which expands to)  List[_ <: SchemaB.this.BaseTableImpl[_1]]
[error]   def wrapper = TableQuery[TableB]
[error]                 ^
it does, however, compile if defined with an explicit type annotation as below
*/
val queryB = TableQuery[TableB]
def wrapper: TableQuery[TableB] = queryB
}
This is where I get my first error, a type mismatch, which I have worked around for now with an explicit type annotation; I suspect it is related to the main error, so stay tuned.
A base DAO, that will provide common query methods:
trait BaseDao[E <: BaseEntity] { self: BaseSchema[E] => }
And finally, putting all the cake layers together:
// now, the actual injection of the traits
class DaoA extends SchemaA
with BaseDao[SubEntityA]
// so far so good...
class DaoB extends SchemaA
with SchemaB
with BaseDao[SubEntityB] // blargh! failure! :
/*
illegal inheritance;
[error] self-type Repro.this.DaoB does not conform to Repro.this.BaseDao[Repro.this.SubEntityB]'s selftype Repro.this.BaseDao[Repro.this.SubEntityB] with Repro.this.BaseSchema[Repro.this.SubEntityB]
[error] with BaseDao[SubEntityB]
[error] ^
*/
On the first error (the type mismatch in SchemaB), I'm completely at a loss. One of the few tricks in my bag is to add explicit type annotations when I run into type-related errors in Scala, which is the only reason I tried that and got it to compile. I would love an explanation of why it happens, and I suspect that fixing my code so it compiles without the annotation would also help with the second error. Which brings me to the second error. To me, it looks like I've included all of the necessary traits to satisfy the self-type tree, but apparently not. My guess is that SchemaB, while extending BaseSchema[SubEntityB], is somehow not being recognized as a BaseSchema[SubEntityB]? Have I not set up my hierarchy properly? Or maybe I need to use bounds instead of strict type references?
The git repo that contains the issue can be found here https://github.com/mdedetrich/scalacache-example
The problem is that I am trying to make my ScalaCache usage backend-agnostic, with the backend configurable at runtime via Typesafe Config.
The issue is that ScalaCache parameterizes the cache by its constructor, e.g. to construct a Caffeine cache you would do
ScalaCache(CaffeineCache())
whereas for a SentinelRedisCache you would do
ScalaCache(SentinelRedisCache("", Set.empty, ""))
In my case, I have created a generic cache wrapper called MyCache as shown below
import scalacache.ScalaCache
import scalacache.serialization.Codec
final case class MyCache[CacheRepr](scalaCache: ScalaCache[CacheRepr])(
implicit stringCodec: Codec[Int, CacheRepr]) {
def putInt(value: Int) = scalaCache.cache.put[Int]("my_int", value, None)
}
We need to carry the CacheRepr along because this is how ScalaCache knows how to serialize any type T. CaffeineCache uses InMemoryRepr as its CacheRepr, whereas SentinelRedisCache uses Array[Byte].
And this is where the crux of the problem is: I have a Config which just stores which cache is being used, i.e.
import scalacache.Cache
import scalacache.caffeine.CaffeineCache
import scalacache.redis.SentinelRedisCache
final case class ApplicationConfig(cache: Cache[_])
The reason it's a Cache[_] is that at compile time we don't know which cache is being used; ApplicationConfig will be instantiated at runtime with either CaffeineCache or SentinelRedisCache.
The problem bites here: Scala is unable to find an implicit Codec for the wildcard type if we just use applicationConfig.cache as a constructor argument, i.e. https://github.com/mdedetrich/scalacache-example/blob/master/src/main/scala/Main.scala#L17
If we uncomment the above line, we get
[error] /Users/mdedetrich/github/scalacache-example/src/main/scala/Main.scala:17:37: Could not find any Codecs for type Int and _$1. Please provide one or import scalacache._
[error] Error occurred in an application involving default arguments.
[error] val myCache3: MyCache[_] = MyCache(ScalaCache(applicationConfig.cache)) // This doesn't
Does anyone know how to solve this problem? Essentially I want to specify in my ApplicationConfig that cache is of type Cache[InMemoryRepr | Array[Byte]] rather than just Cache[_] (so that the Scala compiler knows to look up implicits for either InMemoryRepr or Array[Byte]), and for MyCache to be defined something like this:
final case class MyCache[CacheRepr <: InMemoryRepr | Array[Byte]](scalaCache: ScalaCache[CacheRepr])
You seem to be asking for the compiler to resolve implicit values based on the run-time selection of the cache type. This is not possible because the compiler is no longer running by the time the application code starts.
You have to make the type resolution happen at compile time, not run time. So you need to define a trait that represents the abstract interface to the cache, and provide a factory function that returns a specific instance based on the setting in ApplicationConfig. It might look something like this (untested):
sealed trait MyScalaCache {
  def putInt(value: Int): Unit
}

object MyScalaCache {
  def apply(): MyScalaCache =
    if (ApplicationConfig.useCaffeine) {
      MyCache(ScalaCache(CaffeineCache()))
    } else {
      MyCache(ScalaCache(SentinelRedisCache("", Set.empty, "")))
    }
}

final case class MyCache[CacheRepr](scalaCache: ScalaCache[CacheRepr])(
    implicit stringCodec: Codec[Int, CacheRepr]) extends MyScalaCache {
  def putInt(value: Int): Unit = scalaCache.cache.put[Int]("my_int", value, None)
}
The compiler will resolve the implicit in MyCache at compile time where the two concrete instances are specified in apply.
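The idea can be checked with a dependency-free sketch. Codec, Backend, and MyScalaCache below are stand-ins invented for illustration, not ScalaCache's real API; the point is that each factory branch fixes the representation type, so implicit resolution succeeds at compile time:

```scala
import scala.collection.mutable

// Stand-in serialization typeclass, one instance per representation type.
trait Codec[A, Repr] {
  def encode(a: A): Repr
  def decode(r: Repr): A
}
object Codec {
  implicit val intString: Codec[Int, String] = new Codec[Int, String] {
    def encode(a: Int): String = a.toString
    def decode(r: String): Int = r.toInt
  }
  implicit val intBytes: Codec[Int, Array[Byte]] = new Codec[Int, Array[Byte]] {
    def encode(a: Int): Array[Byte] = a.toString.getBytes("UTF-8")
    def decode(r: Array[Byte]): Int = new String(r, "UTF-8").toInt
  }
}

// Stand-in cache backend parameterized by its representation type.
class Backend[Repr] {
  private val store = mutable.Map.empty[String, Repr]
  def put(k: String, v: Repr): Unit = store(k) = v
  def get(k: String): Option[Repr] = store.get(k)
}

sealed trait MyScalaCache {
  def putInt(value: Int): Unit
  def getInt: Option[Int]
}

final case class MyCache[Repr](backend: Backend[Repr])(
    implicit codec: Codec[Int, Repr]) extends MyScalaCache {
  def putInt(value: Int): Unit = backend.put("my_int", codec.encode(value))
  def getInt: Option[Int] = backend.get("my_int").map(codec.decode)
}

object MyScalaCache {
  // Each branch fixes Repr to a concrete type, so the compiler resolves the
  // matching Codec here, at compile time; run time only picks the branch.
  def apply(useInMemory: Boolean): MyScalaCache =
    if (useInMemory) MyCache(new Backend[String])
    else MyCache(new Backend[Array[Byte]])
}
```

Callers only ever see the sealed interface, so no wildcard type escapes to a position where an implicit would have to be found for it.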
I'm writing a set of implicit Scala wrapper classes for an existing Java library (so that I can decorate that library to make it more convenient for Scala developers).
As a trivial example, let's say that the Java library (which I can't modify) has a class such as the following:
public class Value<T> {
// Etc.
public void setValue(T newValue) {...}
public T getValue() {...}
}
Now let's say I want to decorate this class with Scala-style getters and setters. I can do this with the following implicit class:
final implicit class RichValue[T](private val v: Value[T])
extends AnyVal {
// Etc.
def value: T = v.getValue
def value_=(newValue: T): Unit = v.setValue(newValue)
}
The implicit keyword tells the Scala compiler that it can convert instances of Value to be instances of RichValue implicitly (provided that the latter is in scope). So now I can apply methods defined within RichValue to instances of Value. For example:
def increment(v: Value[Int]): Unit = {
v.value = v.value + 1
}
(Agreed, this isn't very nice code, and is not exactly functional. I'm just trying to demonstrate a simple use case.)
Unfortunately, Scala does not allow implicit classes to be top-level, so they must be defined within a package object, object, class or trait and not just in a package. (I have no idea why this restriction is necessary, but I assume it's for compatibility with implicit conversion functions.)
However, I'm also extending RichValue from AnyVal to make this a value class. If you're not familiar with them, they allow the Scala compiler to make allocation optimizations. Specifically, the compiler does not always need to create instances of RichValue, and can operate directly on the value class's constructor argument.
In other words, there's very little performance overhead from using a Scala implicit value class as a wrapper, which is nice. :-)
However, a major restriction of value classes is that they cannot be defined within a class or a trait; they can only be members of packages, package objects or objects. (This is so that they do not need to maintain a pointer to the outer class instance.)
An implicit value class must honor both sets of constraints, so it can only be defined within a package object or an object.
And therein lies the problem. The library I'm wrapping contains a deep hierarchy of packages with a huge number of classes and interfaces. Ideally, I want to be able to import my wrapper classes with a single import statement, such as:
import mylib.implicits._
to make using them as simple as possible.
The only way I can currently see of achieving this is to put all of my implicit value class definitions inside a single package object (or object) within a single source file:
package mylib
package object implicits {
implicit final class RichValue[T](private val v: Value[T])
extends AnyVal {
// ...
}
// Etc. with hundreds of other such classes.
}
However, that's far from ideal, and I would prefer to mirror the package structure of the target library, yet still bring everything into scope via a single import statement.
Is there a straightforward way of achieving this that doesn't sacrifice any of the benefits of this approach?
(For example, I know that if I forego making these wrappers value classes, then I can define them within a number of different traits - one for each component package - and have my root package object extend all of them, bringing everything into scope through a single import, but I don't want to sacrifice performance for convenience.)
implicit final class RichValue[T](private val v: Value[T]) extends AnyVal
Is essentially a syntax sugar for the following two definitions
import scala.language.implicitConversions // or use a compiler flag
final class RichValue[T](private val v: Value[T]) extends AnyVal
@inline implicit def RichValue[T](v: Value[T]): RichValue[T] = new RichValue(v)
(which, as you can see, is why implicit classes have to be inside traits, objects or classes: they have a matching def as well)
There is nothing that requires those two definitions to live together. You can put them into separate objects:
object wrappedLibValues {
final class RichValue[T](private val v: Value[T]) extends AnyVal {
// lots of implementation code here
}
}
object implicits {
@inline implicit def RichValue[T](v: Value[T]): wrappedLibValues.RichValue[T] = new wrappedLibValues.RichValue(v)
}
Or into traits:
object wrappedLibValues {
final class RichValue[T](private val v: Value[T]) extends AnyVal {
// implementation here
}
trait Conversions {
@inline implicit def RichValue[T](v: Value[T]): RichValue[T] = new RichValue(v)
}
}
object implicits extends wrappedLibValues.Conversions
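To see the split in action, here is a self-contained sketch, with a stand-in Value class playing the role of the Java original:

```scala
import scala.language.implicitConversions

// Stand-in for the Java library class being wrapped (hypothetical).
class Value[T](private var v: T) {
  def getValue: T = v
  def setValue(newValue: T): Unit = v = newValue
}

object wrappedLibValues {
  // The value class sits inside an object, as the AnyVal rules require.
  final class RichValue[T](private val v: Value[T]) extends AnyVal {
    def value: T = v.getValue
    def value_=(newValue: T): Unit = v.setValue(newValue)
  }
}

object implicits {
  // The conversion lives separately; a single import of this object
  // brings every wrapper into scope.
  @inline implicit def RichValue[T](v: Value[T]): wrappedLibValues.RichValue[T] =
    new wrappedLibValues.RichValue(v)
}
```

With import implicits._ in scope, v.value and v.value = x work on any Value[T], while the wrapper itself can live in a file mirroring the library's package layout.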
Can someone help me understand what is wrong with the code below? The problem is inside the "join" method: I am not able to set the "state" field. The error message is:
No implicit view available from code.model.Membership.MembershipState.Val => _14.MembershipState.Value.
[error] create.member(user).group(group).state(MembershipState.Accepted).save
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
What does _14 mean? I tried a similar thing with MappedGender and it works as expected, so why does MappedEnum fail?
Scala 2.10, Lift 2.5
Thanks
package code
package model
import net.liftweb.mapper._
import net.liftweb.util._
import net.liftweb.common._
class Membership extends LongKeyedMapper[Membership] with IdPK {
def getSingleton = Membership
object MembershipState extends Enumeration {
val Requested = new Val(1, "Requested")
val Accepted = new Val(2, "Accepted")
val Denied = new Val(3, "Denied")
}
object state extends MappedEnum(this, MembershipState)
{
override def defaultValue = MembershipState.Requested
}
object member extends MappedLongForeignKey(this, User) {
override def dbIndexed_? = true
}
object group extends MappedLongForeignKey(this, Group) {
override def dbIndexed_? = true
}
}
object Membership extends Membership with LongKeyedMetaMapper[Membership] {
def join (user : User, group : Group) = {
create.member(user).group(group).state(MembershipState.Accepted).save
}
}
Try moving your MembershipState enum outside of the Membership class. I was getting the same error as you until I tried this. Not sure why, but the code compiled after I did that.
_14 means a compiler-generated intermediate anonymous value. In other words, the compiler doesn't know how to express the type it's looking for in a better way.
But if you look past that, you see the compiler is looking for a conversion from [...].Val to [...].Value. I would guess that changing
val Requested = new Val(1, "Requested")
to
val Requested = Value(1, "Requested")
would fix the error.
(I'm curious where you picked up the "new Val" style?)
What's strange is that Val actually extends Value, so if the outer type were known correctly (not inferred as the odd _14), Val vs. Value wouldn't be a problem. The issue here is that Lift for some reason defines the setters using the now-deprecated view-bound syntax. Perhaps this causes the compiler, rather than going in a straight line and trying to fit the input type into the expected type, to start from both ends, fixing the starting type and the required type, and then searching for an implicit view function that can reconcile the two.
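The suggested change is easy to sanity-check against plain scala.Enumeration, independent of Lift:

```scala
object MembershipState extends Enumeration {
  // Value(...) gives each member the inferred type MembershipState.Value,
  // which is what APIs declared against Value (rather than Val) expect.
  val Requested = Value(1, "Requested")
  val Accepted  = Value(2, "Accepted")
  val Denied    = Value(3, "Denied")
}
```

The explicit ids and names are preserved, so existing persisted values keep working.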
I've looked at examples of logging in Scala, and they usually look like this:
import org.slf4j.LoggerFactory
trait Loggable {
private lazy val logger = LoggerFactory.getLogger(getClass)
protected def debug(msg: => AnyRef, t: => Throwable = null): Unit =
{...}
}
This seems independent of the concrete logging framework. While this does the job, it also introduces an extraneous lazy val in every instance that wants to do logging, which might well be every instance of the whole application. This seems much too heavy to me, in particular if you have many "small instances" of some specific type.
Is there a way of putting the logger in the object of the concrete class instead, just by using inheritance? If I have to explicitly declare the logger in the object of the class, and explicitly refer to it from the class/trait, then I have written almost as much code as if I had done no reuse at all.
Expressed in a non-logging specific context, the problem would be:
How do I declare in a trait that the implementing class must have a singleton object of type X, and that this singleton object must be accessible through method def x: X ?
I can't simply define an abstract method, because there could only be a single implementation in the class. I want logging in a super-class to get me the super-class singleton, and logging in the sub-class to get me the sub-class singleton. Put more simply, I want logging in Scala to work like traditional logging in Java: using static loggers specific to the class doing the logging. My current knowledge of Scala tells me this is simply not possible without doing it exactly the same way you would in Java, with little if any benefit from the "better" Scala.
Premature Optimization is the root of all evil
Let's be clear first about one thing: if your trait looks something like this:
trait Logger { lazy val log = Logger.getLogger }
Then what you have not done is as follows:
You have NOT created a logger instance per instance of your type
You have neither given yourself a memory nor a performance problem (unless you have)
What you have done is as follows:
You have an extra reference in each instance of your type
When you access the logger for the first time, you are probably doing some map lookup
Note that, even if you did create a separate logger for each instance of your type (which I frequently do, even if my program contains hundreds of thousands of these, so that I have very fine-grained control over my logging), you almost certainly still will have neither a performance nor a memory problem!
One "solution" is (of course), to make the companion object implement the logger interface:
object MyType extends Logger
class MyType {
import MyType._
log.info("Yay")
}
How do I declare in a trait that the
implementing class must have a
singleton object of type X, and that
this singleton object must be
accessible through method def x: X ?
Declare a trait that must be implemented by your companion objects.
import org.slf4j.LoggerFactory

trait Meta[Base] {
  val logger = LoggerFactory.getLogger(getClass)
}
Create a base trait for your classes; sub-classes have to override the meta method.
trait Base {
def meta: Meta[Base]
def logger = meta.logger
}
A class Whatever with a companion object:
object Whatever extends Meta[Base]
class Whatever extends Base {
def meta = Whatever
def doSomething = {
logger.info("oops")
}
}
In this way you only need to have a reference to the meta object.
We can use the Whatever class like this.
object Sample {
def main(args: Array[String]) {
val whatever = new Whatever
whatever.doSomething
}
}
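The same wiring can be exercised without SLF4J; the string-returning log method below is a stand-in logger so the sketch stays dependency-free:

```scala
trait Meta {
  // One instance per companion object, i.e. one "static" logger per class.
  def log(msg: String): String = s"${getClass.getName}: $msg"
}

trait Base {
  def meta: Meta
  def log(msg: String): String = meta.log(msg)
}

object Whatever extends Meta
class Whatever extends Base {
  def meta: Meta = Whatever
  def doSomething: String = log("oops")
}
```

Every instance of Whatever routes through the single companion object, so the per-instance cost is one meta reference.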
I'm not sure I understand your question completely, so I apologize up front if this is not the answer you are looking for.
Define an object where you put your logger, then create a companion trait.
object Loggable {
private val logger = "I'm a logger"
}
trait Loggable {
import Loggable._
def debug(msg: String) {
println(logger + ": " + msg)
}
}
So now you can use it like this:
scala> abstract class Abstraction
scala> class Implementation extends Abstraction with Loggable
scala> val test = new Implementation
scala> test.debug("error message")
I'm a logger: error message
Does this answer your question?
I think you cannot automatically get the corresponding singleton object of a class, or require that such a singleton exists.
One reason is that you cannot know the type of the singleton before it is defined. I'm not sure if this helps or if it is the best solution to your problem, but if you want to require some meta object to be defined with a specific trait, you could define something like:
trait HasSingleton[Traits] {
def meta: Traits
}
trait Log {
def classname: String
def log { println(classname) }
}
trait Debug {
def debug { print("Debug") }
}
class A extends HasSingleton[Log] {
def meta = A // needs to be defined with a singleton (or any object which inherits from Log)
def f {
meta.log
}
}
object A extends Log {
def classname = "A"
}
class B extends HasSingleton[Log with Debug] { // we want to use Log and Debug here
def meta = B
def g {
meta.log
meta.debug
}
}
object B extends Log with Debug {
def classname = "B"
}
(new A).f
// A
(new B).g
// B
// Debug