Why do I get "expected class or object definition" when defining a type in Scala?

If I write something like this (to define Slick tables as per docs):
type UserIdentity = (String, String)
class UserIdentity(tag: Tag) {
  ...
}
I get a compile error: "expected class or object definition" pointing to the type declaration. Why?

You can't define type aliases outside of a class, trait, or object definition.
If you want a type alias available at the package level (so you don't have to import it explicitly), the easiest way around this is to define a package object, which has the same name as the package and allows you to define anything inside it, including type aliases.
So if you have a foo.bar package and you wish to add a type alias to it, do this:
package foo

package object bar {
  type UserIdentity = (String, String)
}

// in another file
package foo.bar

val x: UserIdentity = ...
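And if the calling code lives outside foo.bar, the alias can be imported explicitly. A small hedged sketch (the package other, the object Demo, and the values are made up):

// in yet another file, outside foo.bar
package other

import foo.bar.UserIdentity

object Demo {
  // the alias is just a pair of strings
  val user: UserIdentity = ("alice", "secret")
}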

Related

Unable to declare functor type that takes zero parameters?

I'm trying to make a type definition for the function type () => Unit. I use this signature quite a bit for cleanup callback functions, and I'd like to give them more meaningful names.
I've tried the following, which I think should be correct syntax, but it doesn't compile:
package myPackage
import stuff
type CleanupCallback = () => Unit
trait myTrait ...
class mObject ...
Why doesn't it compile? And what is the correct syntax?
The compilation error is: expected class or object definition
You can't declare a type alias outside of a class/trait/object scope. But you can declare it in a package object as follows:
package object myPackage {
  type CleanupCallback = () => Unit
}
It will be visible to all classes in myPackage.
You can also import it into classes that belong to other packages:
import myPackage.CleanupCallback
trait MyTrait {
  def foo: CleanupCallback
}
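As a further illustration, any zero-argument function returning Unit can be assigned to the alias and invoked later. A hedged sketch (the object and value names here are made up):

import myPackage.CleanupCallback

object CleanupDemo {
  // any () => Unit value satisfies the alias
  val releaseTempFiles: CleanupCallback = () => println("temp files removed")

  def runAll(callbacks: Seq[CleanupCallback]): Unit = callbacks.foreach(cb => cb())

  def main(args: Array[String]): Unit = runAll(Seq(releaseTempFiles))
}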
The IntelliJ IDEA Scala plugin supports creating package objects for you; if you don't use it, you can also create one by hand:
Create a file named package.scala in your package. The file must contain:
package object packageName { // the name must match the package name
  // ...
}

ficus configuration load generic

Loading a ficus configuration like
def loadConfiguration[T <: Product](): T = {
  import net.ceedubs.ficus.readers.ArbitraryTypeReader._
  import net.ceedubs.ficus.Ficus._
  val config: Config = ConfigFactory.load()
  config.as[T]
}
fails with:
Cannot generate a config value reader for type T, because it has no apply method in a companion object that returns type T, and it doesn't have a primary constructor
whereas directly specifying a concrete case class instead of T (e.g. SomeClass) works just fine. What am I missing here?
Ficus uses the type class pattern, which allows you to constrain generic types by specifying operations that must be available for them. Ficus also provides type class instance "derivation", which in this case is powered by a macro that can inspect the structure of a specific case class-like type and automatically create a type class instance.
The problem in this case is that T isn't a specific case class-like type—it's any old type that extends Product, which could be something nice like this:
case class EasyToDecode(a: String, b: String, c: String)
But it could also be:
trait X extends Product {
  val youWillNeverDecodeMe: String
}
The macro you've imported from ArbitraryTypeReader has no way of knowing which of these it will get, since T is generic here. So you'll need a different approach.
The relevant type class here is ValueReader, and you could minimally change your code to something like the following to make sure T has a ValueReader instance (note that the T: ValueReader syntax here is what's called a "context bound"):
import net.ceedubs.ficus.Ficus._
import net.ceedubs.ficus.readers.ValueReader
import com.typesafe.config.{ Config, ConfigFactory }

def loadConfiguration[T: ValueReader]: T = {
  val config: Config = ConfigFactory.load()
  config.as[T]
}
This specifies that T must have a ValueReader instance (which allows us to use .as[T]) but says nothing else about T, or about where its ValueReader instance needs to come from.
The person calling this method with a concrete type MyType then has several options. Ficus provides instances that are automatically available everywhere for many standard library types, so if MyType is e.g. Int, they're all set:
scala> ValueReader[Int]
res0: net.ceedubs.ficus.readers.ValueReader[Int] = net.ceedubs.ficus.readers.AnyValReaders$$anon$2@6fb00268
If MyType is a custom type, then either they can manually define their own ValueReader[MyType] instance, or they can import one that someone else has defined, or they can use generic derivation (which is what ArbitraryTypeReader does).
The key point here is that the type class pattern allows you as the author of a generic method to specify the operations you need, without saying anything about how those operations will be defined for a concrete type. You just write T: ValueReader, and your caller imports ArbitraryTypeReader as needed.
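For example, the call site for a custom configuration class might look roughly like this (a hedged sketch: AppConfig and its fields are made up and would need to match the keys in the loaded configuration):

import net.ceedubs.ficus.readers.ArbitraryTypeReader._

// hypothetical configuration class
case class AppConfig(host: String, port: Int)

// the macro-derived ValueReader[AppConfig] is now in implicit scope,
// so the T: ValueReader context bound on loadConfiguration is satisfied
val appConfig = loadConfiguration[AppConfig]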

Create a companion object that mixes in a trait that defines a method which returns an object of the object's companion class

Abstract problem: Create a trait that can be mixed into the companion object of a class, to give that object a method that returns an object of that class.
Concrete problem: I'm trying to create a bunch of classes for use with RESTful service calls, that know how to serialize and de-serialize themselves, like so:
case class Foo(
  var bar: String,
  var blip: String
) extends SerializeToJson

object Foo extends DeserializeFromJson
The intended usage is like so:
var f = Foo( "abc","123" )
var json = f.json
var newF = Foo.fromJson( json )
I'm using Genson to do the serialization/deserialization, which I access through a global object:
object JSON {
val parser = new ScalaGenson( new GensonBuilder() <...> )
}
Then I define the traits like so:
trait SerializeToJson {
  def json: String = JSON.parser.toJson(this)
}

trait DeserializeFromJson[T <: DeserializeFromJson[T]] {
  def fromJson(json: String): T = JSON.parser.fromJson(json)
}
This compiles. But this does not:
object Foo extends DeserializeFromJson[Foo]
I get the following error message:
type arguments [Foo] do not conform to trait DeserializeFromJson's
type parameter bounds [T <: DeserializeFromJson[T]]
I've tried creating a single trait, like so:
trait JsonSerialization[T <: JsonSerialization[T]] {
  def json(implicit m: Manifest[JsonSerialization[T]]): String =
    JSON.parser.toJson(this)(m)
  def fromJson(json: String): T =
    JSON.parser.fromJson(json)
}
Now, if I just declare case class Foo (...) extends JsonSerialization[Foo] then I can't call Foo.fromJson because only an instance of class Foo has that method, not the companion object.
If I declare object Foo extend JsonSerialization[Foo] then I can compile and Foo has a .fromJson method. But at run time, the call to fromJson thinks that T is a JsonSerialization, and not a Foo, or so the following run-time error suggests:
java.lang.ClassCastException: scala.collection.immutable.HashMap$HashTrieMap cannot be cast to ...JsonSerialization
at ...JsonSerialization$class.fromJson(DataModel.scala:14)
at ...Foo.fromJson(Foo.scala:6)
And I can't declare object Foo extends Foo because I get
module extending its companion class cannot use default constructor arguments
So I can try adding constructor parameters, and that compiles and runs, but again the run-time type when it tries to deserialize is wrong, giving me the above error.
The only thing I've been able to do that works is to define fromJson in every companion object. But there MUST be a way to define it in a trait, and just mix in that trait. Right?
The solution is to simplify the type parameter for the trait.
trait DeserializeFromJson[T] {
  def fromJson(json: String)(implicit m: Manifest[T]): T =
    JSON.parser.fromJson[T](json)(m)
}
Now, the companion object can extend DeserializeFromJson[Foo] and when I call Foo.fromJson( json ) it is able to tell Genson the correct type information so that an object of the appropriate type is created.
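With that simplified trait, the usage sketched in the question works roughly as intended. A hedged sketch, reusing Foo from the question; the implicit Manifest[Foo] is supplied automatically at the call site:

case class Foo(var bar: String, var blip: String) extends SerializeToJson
object Foo extends DeserializeFromJson[Foo]

val f = Foo("abc", "123")
val json = f.json                    // serialize via SerializeToJson
val newF: Foo = Foo.fromJson(json)   // deserialize via the companion's mixin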
The problem is related to how implicits work.
Genson expects a Manifest that it uses to know which type it must deserialize to. This manifest is declared as an implicit parameter in Genson, meaning the compiler will try to supply it from the implicits available in the caller's scope. However, in your original version there is no Manifest[T] available inside DeserializeFromJson.
An alternative is to define DeserializeFromJson like this (the context bound just produces a constructor with an implicit Manifest[T] argument):
abstract class DeserializeFromJson[T: Manifest] {
  def fromJson(json: String): T = JSON.parser.fromJson[T](json)
}
object Foo extends DeserializeFromJson[Foo]
More generally, if you don't add value by encapsulating a library (in this case Genson), I think you shouldn't do it: you effectively reduce Genson's features (now people can only work with strings) and introduce problems like the one you hit.
I think your type parameter constraint was originally wrong;
you had
trait DeserializeFromJson[T <: DeserializeFromJson[T]]
With your own answer, you fully relaxed it; you needed
trait DeserializeFromJson[T <: SerializeToJson]
...which the error was trying to tell you.
The need for the implicit Manifest (a ClassTag nowadays, I believe) or a context bound was on the money.
It would be nice for Scala to allow inheritance and type-parameter constraints to be specified in terms of the class/trait and companion-object relationship, given that it is already aware of that relationship, to some degree, for access modifiers and implicit scope.

Specifying the requirements for a generic type

I want to call a constructor of a generic type T, but I also want it to have a specific constructor with only one Int argument:
class Class1[T] {
  def method1(i: Int) = {
    val instance = new T(i) // oops!
    i
  }
}
How do I specify this requirement?
UPDATE:
How acceptable (flexible, etc) is it to use something like this? That's a template method pattern.
abstract class Class1[T] {
  def creator: Int => T
  def method1(i: Int) = {
    val instance = creator(i) // seems ok
    i
  }
}
Scala doesn't allow you to specify the constructor's signature in a type constraint (as, for example, C# does).
However Scala does allow you to achieve something equivalent by using the type class pattern. This is more flexible, but requires writing a bit more boilerplate code.
First, define a trait which will be an interface for creating a T given an Int.
trait Factory[T] {
  def fromInt(i: Int): T
}
Then, define an implicit instance for any type you want. Let's say you have some class Foo with an appropriate constructor.
implicit val FooFactory = new Factory[Foo] {
  def fromInt(i: Int) = new Foo(i)
}
Now, you can specify a context bound for the type parameter T in the signature of Class1:
class Class1[T : Factory] {
  def method1(i: Int) = {
    val instance = implicitly[Factory[T]].fromInt(i)
    // ...
  }
}
The constraint T : Factory says that there must be an implicit Factory[T] in scope. When you need to use the instance, you grab it from implicit scope using the implicitly method.
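A hedged usage sketch, assuming the class Foo with a single-Int constructor and the implicit FooFactory from above are both in scope:

val c = new Class1[Foo]  // compiles because an implicit Factory[Foo] is available
c.method1(42)            // builds a Foo internally via FooFactory.fromInt(42)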
Alternatively, you could specify the factory as an implicit parameter to the method that requires it.
class Class1[T] {
  def method1(i: Int)(implicit factory: Factory[T]) = {
    val instance = factory.fromInt(i)
    // ...
  }
}
This is more flexible than putting the constraint in the class signature, because it means you could have other methods on Class1 that don't require a Factory[T]. In that case, the compiler will not enforce that there is a Factory[T] unless you call one of the methods that requires it.
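To illustrate that last point (a hedged sketch; Bar is a made-up type with no Factory instance):

class Bar // no Factory[Bar] is defined anywhere

val c = new Class1[Bar]  // fine: constructing Class1 requires no Factory[Bar]
// c.method1(1)          // would not compile: no implicit Factory[Bar] in scope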
In response to your update (with the abstract creator method), this is a perfectly reasonable way to do it, as long as you don't mind creating a subtype of Class1 for every T. Also note that T will need to be a concrete type at any point that you want to create an instance of Class1, because you will need to provide a concrete implementation for the abstract method.
Consider trying to create an instance of Class1 inside another generic method. When using the type class pattern, you can extend the necessary type constraint to the type signature of that method, in order to make this compile:
def instantiateClass1[T : Factory] = new Class1[T]
If you don't need to do this, then you might not need the full power of the type class pattern.
When you create a generic class or trait, the class does not gain special access to the methods of whatever actual class you might parameterise it with. When you say
class Class1[T]
You are saying
This is a class which will work with unspecified type T.
Most of its methods will take instances of type T as a parameter or return T.
Any variance annotations or type bounds attached to the type parameter will be applied whenever it appears as a parameter of one of Class1's methods.
There is no such thing as a type "Class1" on its own, but there may be an arbitrary number of concrete types of the form "Class1[something]".
That's all. You get no special access to T from within Class1, because Scala does not know what T is. If you wanted Class1 to have access to T's fields and methods, you should have extended it or mixed it in.
If you want access to the methods of T (without using reflection), you can only do that from within one of Class1's methods which accepts a parameter of type T. And then you will get whichever version of the method belongs to the specific type of the actual object which is passed.
(You can work around this with reflection, but that is a runtime solution and absolutely not typesafe).
Look at what you are trying to do in your original code snippet...
You have specified that Class1 can be parameterised with any arbitrary type.
You want to invoke T with a constructor which takes a single Int parameter
But what have you done to promise the Scala compiler that T will have such a constructor? Nothing at all. So how can the compiler trust this? Well, it can't.
Even if you added an upper type bound, requiring that T be a subclass of some class which does have such a constructor, that doesn't help; T might be a subclass which has a more complex constructor, which calls back to the simpler constructor. So at the point where Class1 is defined, the compiler can have no confidence about the safety of constructing T with that simple method. So that call cannot be type-safe.
Class-based OO isn't about conjuring unknown types out of the ether; it doesn't let you plunge your hand into a top-hat-shaped class loader and pull out a surprise. It allows you to handle arbitrary already-created instances of some general type without knowing their specific type. At the point where those objects are created, there's no ambiguity at all.

Initializing and using a field in an abstract generic class in Scala

I have something like this in Scala:
abstract class Point[Type](n: String) {
  val name = n
  var value: Type = _
}
So far so good. The problem comes in a class that extends Point.
case class Input[Type](n: String) extends Point(n) {
  def setValue(va: Type) = value = va
}
On the setValue line I have this problem:
[error] type mismatch;
[error] found : va.type (with underlying type Type)
[error] required: Nothing
[error] def setValue(va: Type) = value = va
I have tried to initialize with null and null.asInstanceOf[Type] but the result is the same.
How can I initialize value so it can be used in setValue?
You should specify that Input extends Point with the generic type Type, because as long as it is not specified it is taken to be Nothing (the compiler can't infer it from the setValue method). So you have to do the following:
case class Input[Type](n: String) extends Point[Type](n) {
  def setValue(va: Type) = value = va
}
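With the type argument supplied, a hedged usage sketch of the fixed class:

val in = Input[Int]("answer")  // the type argument must be given explicitly; it cannot be inferred from n
in.setValue(42)
println(in.value)              // prints 42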
More information
I answered this question for the compilation error (it does compile on Scala 2.9.0.1). Moreover, I saw this case class as the implementation for an existing type, like Int. Using _ in the abstract class is of course a bad idea, but it is not prohibited, and _ is not always null: it is the default value. For example, var x: Int = _ will assign the value 0 to x.
Try the following:
package inputabstraction

abstract class Point[T](n: String) {
  def value: T
  val name = n
}

case class Input[T](n: String, value: T) extends Point[T](n)

object testTypedCaseClass {
  def test() {
    val foo = Input("foo", "bar")
    println(foo)
  }
}
A simple Application to check that it works:
import inputabstraction._

object TestApp extends Application {
  testTypedCaseClass.test()
}
Explanation
The first mistake you are making is case class Input[Type](n:String) extends Point(n){. Point is a typed class, and so when you are calling the superclass constructor with extends Point(n) you need to specify the type of Point. This is done like this: extends Point[T](n), where T is the Type you are planning to use.
The second error is that you are both defining and declaring value:T here: var value: Type = _. In this statement, _ is a value. Its value is Nothing. The scala compiler infers from this that Point[T] is Point[Nothing]. Thus when you attempt to set it to a type in the body of your setValue method, you must set it to Nothing, which is probably not what you want. If you attempt to set it to anything besides Nothing, you will get the type mismatch from above, because value is typed as Nothing due to your use of _.
The third mistake is using var instead of val or def. val and def can be overridden interchangeably, which means that subtypes can override with either val or def and the Scala compiler will figure it out for you. It is best practice to define such members with def in abstract classes and traits, because the initialization order of subtype constructors is a very difficult thing to get right (there is an algorithm for how the compiler decides how to construct a class from its supertypes). TL;DR: use def in supertypes. Case class parameters automatically generate val fields, which, since you are extending Point, creates a val value field that overrides the def value member in Point[T].
You can get away with all this Type/T abstraction in Scala because of type inference and the fact that Point is abstract, which makes value overridable via a val.
The preferred way of doing dependency injection like this is the cake pattern, but this example I have provided works for your use-case.