Can someone help me understand what is wrong with the code below? The problem is inside the "join" method: I am not able to set the "state" field. The error message is:
No implicit view available from code.model.Membership.MembershipState.Val => _14.MembershipState.Value.
[error] create.member(user).group(group).state(MembershipState.Accepted).save
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
What does _14 mean? I tried a similar thing with MappedGender and it works as expected, so why does MappedEnum fail?
Scala 2.10
Lift 2.5
Thanks
package code
package model
import net.liftweb.mapper._
import net.liftweb.util._
import net.liftweb.common._
class Membership extends LongKeyedMapper[Membership] with IdPK {
  def getSingleton = Membership

  object MembershipState extends Enumeration {
    val Requested = new Val(1, "Requested")
    val Accepted = new Val(2, "Accepted")
    val Denied = new Val(3, "Denied")
  }

  object state extends MappedEnum(this, MembershipState) {
    override def defaultValue = MembershipState.Requested
  }

  object member extends MappedLongForeignKey(this, User) {
    override def dbIndexed_? = true
  }

  object group extends MappedLongForeignKey(this, Group) {
    override def dbIndexed_? = true
  }
}

object Membership extends Membership with LongKeyedMetaMapper[Membership] {
  def join(user: User, group: Group) = {
    create.member(user).group(group).state(MembershipState.Accepted).save
  }
}
Try moving your MembershipState enum outside of the Membership class. I was getting the same error as you until I tried this. Not sure why, but the code compiled after I did that.
_14 means a compiler-generated intermediate anonymous value. In other words, the compiler doesn't know how to express the type it's looking for in a better way.
But if you look past that, you see the compiler is looking for a conversion from [...].Val to [...].Value. I would guess that changing
val Requested = new Val(1, "Requested")
to
val Requested = Value(1, "Requested")
would fix the error.
(I'm curious where you picked up the "new Val" style?)
What's strange is that Val actually extends Value. So if the outer type were known correctly (not inferred to the odd _14), Val vs. Value wouldn't be a problem. The issue here is that Lift for some reason defines the setters using the now-deprecated view-bound syntax. Perhaps this causes the compiler, rather than going in a straight line and trying to fit the input type into the expected type, to instead start from both ends: it fixes the starting type and the required type, then searches for an implicit view function that can reconcile the two.
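Concretely, the whole enumeration under the suggested fix would look like this (a sketch of the Value(...) suggestion above, untested against Lift 2.5):
object MembershipState extends Enumeration {
  // Value(...) produces vals typed as MembershipState.Value,
  // which is the type MappedEnum's setter expects.
  val Requested = Value(1, "Requested")
  val Accepted = Value(2, "Accepted")
  val Denied = Value(3, "Denied")
}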
The Git repo that contains the issue can be found here: https://github.com/mdedetrich/scalacache-example
The problem I currently have is that I am trying to make my ScalaCache setup backend agnostic, with the backend configurable at runtime using Typesafe Config.
The issue I have is that ScalaCache parameterizes the constructors of the cache, i.e. to construct a Caffeine cache you would do
ScalaCache(CaffeineCache())
whereas for a SentinelRedisCache you would do
ScalaCache(SentinelRedisCache("", Set.empty, ""))
In my case, I have created a generic cache wrapper called MyCache as shown below
import scalacache.ScalaCache
import scalacache.serialization.Codec

final case class MyCache[CacheRepr](scalaCache: ScalaCache[CacheRepr])(
    implicit stringCodec: Codec[Int, CacheRepr]) {
  def putInt(value: Int) = scalaCache.cache.put[Int]("my_int", value, None)
}
We need to carry the CacheRepr along because this is how ScalaCache knows how to serialize any type T. CaffeineCache uses a CacheRepr which is InMemoryRepr, whereas SentinelRedisCache uses a CacheRepr which is Array[Byte].
This is where the problem starts: I have a config which just stores which cache is being used, i.e.
import scalacache.Cache
import scalacache.caffeine.CaffeineCache
import scalacache.redis.SentinelRedisCache
final case class ApplicationConfig(cache: Cache[_])
The reason why it's a Cache[_] is that at compile time we don't know which cache is being used; ApplicationConfig will be instantiated at runtime with either CaffeineCache or SentinelRedisCache.
And this is where the crux of the problem is: Scala is unable to find an implicit Codec for the wildcard type if we just use applicationConfig.cache as a constructor argument, i.e. https://github.com/mdedetrich/scalacache-example/blob/master/src/main/scala/Main.scala#L17
If we uncomment the above line, we get
[error] /Users/mdedetrich/github/scalacache-example/src/main/scala/Main.scala:17:37: Could not find any Codecs for type Int and _$1. Please provide one or import scalacache._
[error] Error occurred in an application involving default arguments.
[error] val myCache3: MyCache[_] = MyCache(ScalaCache(applicationConfig.cache)) // This doesn't
Does anyone know how to solve this problem? Essentially I want to specify in my ApplicationConfig that cache is of type Cache[InMemoryRepr | Array[Byte]] rather than just Cache[_] (so that the Scala compiler knows to look up implicits for either InMemoryRepr or Array[Byte]), and for MyCache to be defined something like this:
final case class MyCache[CacheRepr <: InMemoryRepr | Array[Byte]](scalaCache: ScalaCache[CacheRepr])
You seem to be asking for the compiler to resolve implicit values based on the run-time selection of the cache type. This is not possible because the compiler is no longer running by the time the application code starts.
You have to make the type resolution happen at compile time, not run time. So you need to define a trait the represents the abstract interface to the cache and provide a factory function that returns a specific instance based on the setting in ApplicationConfig. It might look something like this (untested):
sealed trait MyScalaCache {
  def putInt(value: Int)
}

object MyScalaCache {
  def apply(): MyScalaCache =
    if (ApplicationConfig.useCaffeine) {
      MyCache(ScalaCache(CaffeineCache()))
    } else {
      MyCache(ScalaCache(SentinelRedisCache("", Set.empty, "")))
    }
}

final case class MyCache[CacheRepr](scalaCache: ScalaCache[CacheRepr])(
    implicit stringCodec: Codec[Int, CacheRepr]) extends MyScalaCache {
  def putInt(value: Int) = scalaCache.cache.put[Int]("my_int", value, None)
}
The compiler will resolve the implicit in MyCache at compile time where the two concrete instances are specified in apply.
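Call sites would then depend only on the trait; a minimal usage sketch under that design:
// The concrete backend (and its CacheRepr) is chosen once inside apply();
// callers never see the type parameter.
val cache: MyScalaCache = MyScalaCache()
cache.putInt(42)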
For example:
object CampaignTypes extends Enumeration {
  type CampaignType = Value
  val ABC, DEF = Value
}

object campaignTypeId extends EnumNameField(this, CampaignTypes) {
  override val defaultValue = CampaignTypes.ABC
}
IntelliJ underlines CampaignTypes.ABC in red with message Expression of type CampaignTypes.Value doesn't conform to expected type EnumType#Value
The code compiles and works, but IntelliJ marks it as an error, making the code difficult to read (and there are many other such cases that IntelliJ likewise fails to resolve). The right Scala plugin is in use. Is there a way to resolve this?
Another example, with respect to methods defined on a BsonRecord:
sealed trait Product {...}

class Document extends BsonRecord[Document] {
  object productType extends StringField(this, 20)
  ....
  def toTyped: Option[Product] = this.productType.get match {
    // something which returns an Option[Product] from a List[Product]
  }
}

object documents extends BsonRecordListField(this, Document) {
  def toProducts: Set[Product] =
    this.get.flatMap(_.toTyped)(breakOut) // Cannot resolve symbol toTyped
}
Yes, you can help IntelliJ by providing type hints:
object campaignTypeId extends EnumNameField[A, CampaignTypes.type](this, CampaignTypes) {
  override val defaultValue = CampaignTypes.ABC
}
where A is the type of the encapsulating class/object
For a general solution:
Try to always provide generic types explicitly instead of relying on inference; as you can see, the inference in IntelliJ IDEA is not great in difficult cases.
If that doesn't cut it, try to provide compiler hints. Lift relies on Manifest, but it could be ClassTag or other things. Making these implicits explicit can help IDEA resolve types correctly.
For BsonRecordListField, making the generic types explicit should solve it too.
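For instance, a hedged sketch of that last point (Wrapper is a stand-in for whatever BsonRecord encloses documents; it is not named in the original snippet, and scala.collection.breakOut must be imported as in the original code):
object documents extends BsonRecordListField[Wrapper, Document](this, Document) {
  // Naming both type parameters explicitly spares the IDE the inference step.
  def toProducts: Set[Product] =
    this.get.flatMap(_.toTyped)(breakOut)
}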
I'm a newbie to Scala and I'm facing an issue I can't understand or solve. I have written a generic trait, which is this one:
trait DistanceMeasure[P <: DbScanPoint] {
  def distance(p1: P, p2: P): Double
}
where DbScanPoint is simply:
trait DbScanPoint extends Serializable {}
Then I have the following two classes extending them:
class Point2d(id: Int, x: Double, y: Double) extends DbScanPoint {
  def getId() = id
  def getX() = x
  def getY() = y
}

class EuclideanDistance extends DistanceMeasure[Point2d] with Serializable {
  override def distance(p1: Point2d, p2: Point2d) = {
    (p1.getX() - p2.getX()) * (p1.getX() - p2.getX()) +
      (p1.getY() - p2.getY()) * (p1.getY() - p2.getY())
  }
}
And at the end I have this class:
class DBScanSettings {
  var distanceMeasure: DistanceMeasure[_ <: DbScanPoint] = new EuclideanDistance
  //...
}
My problem is that when I write this in my test main:
val dbScanSettings = new DBScanSettings()
dbScanSettings.distanceMeasure.distance(new Point2d(1,1,1), new Point2d(2,2,2))
I get the following compilation error:
type mismatch;
[error] found : it.polito.dbdmg.ontic.point.Point2d
[error] required: _$1 where type _$1 <: it.polito.dbdmg.ontic.point.DbScanPoint
I can't understand what the problem is. I have done a very similar thing with other classes and got no error, so the reason for this error is quite obscure to me.
May somebody help me?
Thanks.
UPDATE
I managed to do what I needed by changing the code to:
trait DistanceMeasure {
  def distance(p1: DbScanPoint, p2: DbScanPoint): Double
}
And obviously making all the related changes.
The heart of your problem is that you are defining your distanceMeasure var with an existential type, so to the compiler that type is not completely known. Then you are calling distance, which takes two instances of type P <: DbScanPoint, passing in two Point2d instances. Now, these are the correct types for the concrete class behind distanceMeasure (a new EuclideanDistance), but the way you defined distanceMeasure (with an existential), the compiler cannot enforce that Point2d instances are the right type for the concrete underlying DistanceMeasure.
Say, for argument's sake, that instead of a new EuclideanDistance you instantiated a completely different impl of DistanceMeasure that did not take Point2d instances, and then tried to call distance the way you have it here. If the compiler can't enforce that the underlying class accepts the arguments supplied, it's going to complain like this.
There are a bunch of ways to fix this, and the solution ultimately depends on the flexibility you need in your class structure. One possible way is like so:
trait DBScanSettings[P <: DbScanPoint] {
  val distanceMeasure: DistanceMeasure[P]
  //...
}

class Point2dScanSettings extends DBScanSettings[Point2d] {
  val distanceMeasure = new EuclideanDistance
}
And then to test:
val dbScanSettings = new Point2dScanSettings()
dbScanSettings.distanceMeasure.distance(new Point2d(1,1,1), new Point2d(2,2,2))
But without me really understanding your requirements for what levels of abstraction you need, it's going to be up to you to define the restructure.
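For instance, another possibility (my sketch, not part of the fix above) keeps a single settings type by moving the point type into an abstract type member:
trait DBScanSettings {
  type P <: DbScanPoint
  val distanceMeasure: DistanceMeasure[P]
}

// The concrete object fixes P, so the compiler again knows exactly
// which point type distance() accepts.
object Point2dSettings extends DBScanSettings {
  type P = Point2d
  val distanceMeasure = new EuclideanDistance
}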
OK, so the real reason I ran into this is that my ScalaTest suite failed to compile, because I defined some classes inside the test scope that call another class file expecting to work with TypeTags. Notice that because class B is within my "test" (pretend this is a ScalaTest call), the TypeTag is no longer available. I suspect maybe I shouldn't be attempting this on an anonymous class inside a local scope, but could someone help me understand, please? Thanks
import scala.reflect.runtime.universe._
import scala.Symbol

class TypeTagger[T: TypeTag] {
  val tt = typeTag[T]
}

object TypeTagger {
  def apply[T]()(implicit tt: TypeTag[T]) = new TypeTagger[T]
}

object TestRunTypeTagger extends App {
  class A
  val test = new TypeTagger[A]

  {
    class B
    val test2 = TypeTagger[B]() // fails
  }
}
Error:
No TypeTag available for B
val test2 = TypeTagger[B]()
            ^
not enough arguments for method apply: (implicit tt: reflect.runtime.universe.TypeTag[B])chorle.scala.testarea.TypeTagger[B] in object TypeTagger. Unspecified value parameter tt.
val test2 = TypeTagger[B]()
            ^
It seems to work with a WeakTypeTag instead of a TypeTag (also change typeTag to weakTypeTag). I have no idea why, really; I couldn't find any documentation about this specifically.
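In code, the change described would look something like this (a sketch, untested):
import scala.reflect.runtime.universe._

// WeakTypeTag tolerates partially unresolved types (such as local classes),
// whereas TypeTag requires a fully resolved type.
class TypeTagger[T: WeakTypeTag] {
  val tt = weakTypeTag[T]
}

object TypeTagger {
  def apply[T]()(implicit tt: WeakTypeTag[T]) = new TypeTagger[T]
}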
I have something like this in Scala:
abstract class Point[Type](n: String) {
  val name = n
  var value: Type = _
}
So far so good. The problem comes in a class that extends Point.
case class Input[Type](n: String) extends Point(n) {
  def setValue(va: Type) = value = va
}
On the setValue line I have this problem:
[error] type mismatch;
[error] found : va.type (with underlying type Type)
[error] required: Nothing
[error] def setValue(va: Type) = value = va
I have tried initializing with null and null.asInstanceOf[Type], but the result is the same.
How can I initialize value so it can be used in setValue?
You should specify that Input extends Point with the generic type Type, because for now, as it is not specified, it is considered to be Nothing (I guess the compiler can't infer it from the setValue method). So you have to do the following:
case class Input[Type](n: String) extends Point[Type](n) {
  def setValue(va: Type) = value = va
}
More information
I answered this question for the compilation error (it does compile on Scala 2.9.0.1). Moreover, I saw this case class as the implementation for an existing type, like Int. The usage of _ is of course a bad idea in the abstract class; however, it is not prohibited. Note that _ is not always null: it is the default value for the type. For example, var x: Int = _ will assign the value 0 to x.
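To illustrate (a quick sketch; _ initialization is only allowed for var fields, not for local variables):
class Defaults {
  var i: Int = _     // 0
  var d: Double = _  // 0.0
  var b: Boolean = _ // false
  var s: String = _  // null (reference types default to null)
}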
Try the following:
package inputabstraction
abstract class Point[T](n:String){
def value: T
val name = n
}
case class Input[T](n:String, value:T) extends Point[T](n)
object testTypedCaseClass{
def test(){
val foo = Input("foo", "bar")
println(foo)
}
}
A simple Application to check that it works:
import inputabstraction._

object TestApp extends Application {
  testTypedCaseClass.test()
}
Explanation
The first mistake you are making is case class Input[Type](n:String) extends Point(n){. Point is a parameterized class, so when you call the superclass constructor with extends Point(n) you need to specify the type of Point. This is done like this: extends Point[T](n), where T is the type you are planning to use.
The second error is that you are both declaring and defining value here: var value: Type = _. In this statement, _ stands for the default value of the type, and because the extends clause gives Point no type argument, the Scala compiler infers Point[Nothing]. Thus when you attempt to set value in the body of your setValue method, you must set it to Nothing, which is probably not what you want. If you attempt to set it to anything besides Nothing, you will get the type mismatch from above, because value is typed as Nothing.
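A minimal sketch of that inference, with hypothetical names not taken from the question:
abstract class Container[A](name: String) {
  var value: A = _
}

// No type argument given: the compiler infers Container[Nothing],
// so nothing useful can ever be assigned to value.
class Bad(name: String) extends Container(name)

// Explicit type argument: value gets the intended type.
class Good[A](name: String) extends Container[A](name)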
The third mistake is using var instead of val or def. val and def can be overridden interchangeably, which means that subtypes can override with either val or def and the Scala compiler will figure it out for you. It is best practice to define vals as functions using def in abstract classes and traits, because the initialization order of subtype constructors is a very difficult thing to get right (there is an algorithm for how the compiler decides how to construct a class from its supertypes). TL;DR: use def in supertypes. Case class parameters automatically generate val fields, which, since you are extending Point, will create a val value field that overrides the def value field in Point[T].
You can get away with all this Type/T abstraction in Scala because of type inference and the fact that Point is abstract, which makes value overridable via a val.
The preferred way of doing dependency injection like this is the cake pattern, but this example I have provided works for your use-case.