Scala - restricting access to generic case classes

If I want a case class that cannot be constructed manually from outside a package, the standard way would be something like this:
case class Foo private[p](a: A, b: B)
object Foo {
  def apply(c: C) = {
    require(tit)
    require(tat)
    Foo(c.a, c.b)
  }
}
Any way to do that if the class looks like this:
case class Bar[T <: MySomething[T]] private[p](t:T)
or will I have to content myself with writing a def that takes care of the case class creation and must be explicitly called?
Edit
Seems I wasn't clear about what my problem was ...
How can I pass the required parameters to the object / apply function?
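For what it's worth, the companion-object pattern carries over once apply gets its own type parameter; here is a rough sketch of my own, where C[T] (some source type that can supply a t: T) and the tit/tat checks are only placeholders in the spirit of the Foo example above:
object Bar {
  def apply[T <: MySomething[T]](c: C[T]): Bar[T] = {
    require(tit)
    require(tat)
    new Bar(c.t)  // the private[p] constructor is accessible from the companion inside package p
  }
}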

Related

Define common copy for a class hierarchy with many case classes

I would like to define a class hierarchy with about 100 case classes deriving from a common base. The types describe nodes in an AST hierarchy. I would like to do something along the lines of:
trait Base {
  def doCopy: Base
}
trait CloneSelf[T <: CloneSelf[T]] extends Base {
  self: T =>
  def copy(): T
  override def doCopy: T = copy()
}
case class CaseA(a: String) extends Base with CloneSelf[CaseA]
case class CaseB(b: Int) extends Base with CloneSelf[CaseB]
This gives an error, because the existence of my copy prevents the case classes from defining the automatic copy. Is there some way to implement the "clone" doCopy so that it uses the automatic copy of those case classes?
I would like to define a class hierarchy with about 100 case classes deriving from a common base.
Please do not do that; you should absolutely find a pattern to avoid it! If you want to do it anyway, try duck typing with a structural type:
trait CloneSelf[T <: { def copy(): T }] extends Base {
  self: T =>
  // structural types need import scala.language.reflectiveCalls
  override def doCopy: T = copy()
}
I cannot test this right now, so it probably won't compile as written, but you should be able to work it out from the general idea!
Edit:
Why having 100 subclasses is evil: imagine you make a single change in the base class, for instance renaming it from Base to BaseCloning -> you will have to change it in EVERY child class (100 changes).
How you avoid that depends on what you want to do with your classes; look at the creational and structural patterns: factory, builder, prototype, flyweight, composite... Always ask yourself "how much work will I have if I change something in the base class? Will it affect all children?"
I have found that defining doCopy in each case class is actually less work than making each class inherit from CloneSelf. The code looks like this:
trait Base {
  def doCopy: Base
}
case class CaseA(a: String) extends Base {
  def doCopy = copy()
}
case class CaseB(b: Int) extends Base {
  def doCopy = copy()
}
I was surprised to learn that without an explicit type on the overriding method the result type is inferred by the compiler, so the static type of CaseA("a").doCopy is the same as that of CaseA("a").copy(), i.e. CaseA, not Base. Adding an explicit type to each case class would probably be more obvious, but it would mean more work than just copy-pasting the same line into each of them. Not that it matters much: when I copy via the concrete case class type I may just as well use copy(), and it is only when all I have is a Base that I need doCopy, so declaring it as def doCopy: Base = copy() would do little harm.
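To see that inference at work, here is a small check of my own:
val a: CaseA = CaseA("a").doCopy          // compiles: the overriding method's inferred result type is CaseA
val b: Base  = (CaseA("a"): Base).doCopy  // through the Base interface only the declared Base result type is visible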

Scala: force type parameter to be a case class

I have an abstract class Model from which I create case classes:
abstract class Model
case class User(...) extends Model
an abstract class Table taking such a Model as type parameter, used in one of its default concrete methods:
abstract class Table[M <: Model] {
  def parser = SomeExternalBuilder[M]
}
The meaning is rather simple: "Give every instance of Table a default parser based on its own class".
The problem is that SomeExternalBuilder will only accept a case class as argument ("case class expected: M"), so it does not compile.
Can I make Table take only case classes as type parameter?
I have seen a few answers providing a missing copy method (ref1, ref2), so I tried this:
trait Model[T] {
  def copy: T
}
abstract class Table[M <: Model[M]]
but now case class User extends Model[User] and must override copy too, every function creating a Model takes a type parameter, and honestly the code quickly starts to become atrocious, all that for that single line in Table.
Is there no better way than copying that def parser line in every child's body?
Edit: N.B. The real call is anorm.Macro.namedParser[M], from the "anorm" library for Play.
Edit: Source of the type check by this macro: https://github.com/playframework/anorm/blob/0a1b19055ba3e3749044ad8a54a6b2326235f7c8/core/src/main/scala/anorm/Macro.scala#L117
The problem is that SomeExternalBuilder will only accept a case class as argument ("case class expected: M"), so it does not compile.
I don't think you can ever get such a message from the Scala compiler itself, which means that SomeExternalBuilder.apply is a macro. It requires a concrete case class in order to know its fields, so it doesn't matter whether you could limit M to be a case class (you can't): it still wouldn't accept a type parameter.
What you can do is create a macro annotation, so that when you write e.g.
@HasModel
class SomeTable extends Table[SomeModel] {
  ...
}
the val parser = namedParser[SomeModel] is generated automatically.
Alternatively, write @HasModel[SomeModel] class SomeTable { ... } and generate the extends Table[SomeModel] clause as well.
It wouldn't be hard (as macros go), but you still need to annotate each class extending Table.
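For illustration, the code such a hypothetical @HasModel annotation would generate might look like this (a sketch of my own, using the anorm namedParser mentioned in the question and assuming Table itself no longer tries to define parser):
class SomeTable extends Table[SomeModel] {
  val parser = anorm.Macro.namedParser[SomeModel]
}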
Not a foolproof solution, but worth a try: case classes extend Product and Serializable, so constraining M to Product with Serializable gives you some type safety. M can still be any class that extends Product with Serializable, but in practice those are mostly case classes.
abstract class Table[M <: (Product with Serializable)] {
  def parser = SomeExternalBuilder[M]
}
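As a quick sanity check of that bound (a sketch of my own, reusing Model from the question and ignoring what happens inside parser):
case class User(name: String) extends Model   // case classes automatically extend Product with Serializable
class UserTable extends Table[User]           // satisfies the bound
// class StringTable extends Table[String]    // would not compile: String is not a Product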

Using value classes in Scala to implement trait methods?

I have a trait that defines a function whose implementation I don't want to specify until later. This trait is mixed into several case classes, like so:
trait AnItem
trait DataFormatable {
  def render(): String = "" // dummy implementation
}
case class Person(name: String, age: Int) extends DataFormatable with AnItem
case class Building(numFloors: Int) extends DataFormatable with AnItem
Ok, so now I want includable modules that pimp specific implementations of this render behavior. Trying to use value classes here:
object JSON {
  implicit class PersonRender(val p: Person) extends AnyVal {
    def render(): String = {
      ??? // render JSON
    }
  }
  // others
}
object XML {
  implicit class PersonRender(val p: Person) extends AnyVal {
    def render(): String = {
      ??? // render XML
    }
  }
  // others
}
The ideal use would look like this (presuming JSON output desired):
import JSON._
val p: AnItem = Person("John", 24)
println(p.render())
All cool, but it doesn't work. Is there a way I can make this loadable-implementation idea work? Am I close?
The DataFormatable trait is doing nothing here but holding you back. You should just get rid of it. Since you want to swap out render implementations based on the existence of implicits in scope, Person can't have its own render method. The compiler will only look for an implicit conversion to PersonRender if Person doesn't have a method named render in the first place. But because Person inherits (or is forced to implement) render from DataFormatable, there is no need to look for the implicit conversion.
Based on your edit, if you have a List[AnItem], it is also not possible to implicitly convert the elements to gain render. While each of the sub-classes may have an implicit conversion that gives it render, the compiler doesn't know that once they are all piled into a list of a more abstract type, particularly an empty trait such as AnItem.
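For instance (a sketch reusing the definitions from the question):
import JSON._
val items: List[AnItem] = List(Person("John", 24), Building(3))
// items.map(_.render())  // does not compile: AnItem declares no render, and no implicit view applies to AnItem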
How can you make this work? You have two simple options.
One, if you want to stick with the implicit conversions, you need to remove DataFormatable as the super-type of your case classes, so that they do not have their own render method. Then you can swap out XML._ and JSON._, and the conversions should work (see the sketch below). However, you won't be able to use mixed collections.
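A minimal sketch of option one, assuming Person simply stops extending DataFormatable while the JSON object above stays as it is:
case class Person(name: String, age: Int) extends AnItem  // no DataFormatable, so no competing render member
import JSON._
println(Person("John", 24).render())  // now resolved through the implicit PersonRender conversion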
Two, drop the implicits altogether and have your trait look like this:
trait DataFormatable {
  def toXML: String
  def toJSON: String
}
This way, you force every class that mixes in DataFormatable to contain its serialization logic (which is the way it should be, rather than hiding it in implicits). Now, when you have a List[DataFormatable], you know that every element can be converted to both JSON and XML, so you can convert a mixed list. I think this would be much better overall, as the code is more straightforward. The imports you happen to have shouldn't really define the behavior of what follows; imagine the confusion that can arise because XML._ was imported at the top of the file instead of JSON._.
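A rough sketch of option two in use (the JSON and XML bodies here are placeholders of my own):
case class Person(name: String, age: Int) extends DataFormatable with AnItem {
  def toJSON: String = s"""{"name":"$name","age":$age}"""
  def toXML: String  = s"<person><name>$name</name><age>$age</age></person>"
}
val items: List[DataFormatable] = List(Person("John", 24))
items.map(_.toJSON)  // a mixed List[DataFormatable] can be serialized without any imports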

Is it possible to mix in a trait using apply?

I have a trait, call it foo. I have a class, call it MyClass:
class MyClass(something: String) { ... }
I can do this:
val myStuff = new MyClass("hello") with foo
What I'd like to do, however, is have my class with a companion object and use apply to create an instance:
class MyClass(something: String) { ... }
object MyClass {
def apply(something: String) = {
new MyClass(something)
}
}
The problem is that the following code isn't valid:
val myStuff = MyClass("hello") with foo
This would use the apply to make an instance of MyClass, but the trait can't be used. Is there a way to do this?
My goal is in test - the trait (foo) contains overrides for a few things in MyClass that lets me use the trait as a data mock. It's precisely what I need, but I'd like to be able to do that without doing a "new." And since I want to mix this into my test code but not in production, putting the "with foo" inside of apply in the companion object isn't an option.
Can it be done?
No, it can't be done with a companion object. apply is just an ordinary method: MyClass("hello") with foo is equivalent to MyClass.apply("hello") with foo, and the apply call is executed first and returns an already-constructed instance, so there is nothing left for with foo to mix into. With a class construction expression (new MyClass("hello") with foo) it is different: the trait is mixed in as part of the construction.
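If the goal is only to avoid writing new at the call sites in your tests, one possible workaround (my own sketch, not part of the answer above) is a test-only factory object that does the mixing itself:
object TestMyClass {
  def apply(something: String): MyClass = new MyClass(something) with foo
}
val myStuff = TestMyClass("hello")  // a MyClass with foo, with no new at the call site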

How to design immutable model classes when using inheritance

I'm having trouble finding an elegant way of designing some simple classes to represent HTTP messages in Scala.
Say I have something like this:
abstract class HttpMessage(headers: List[String]) {
  def addHeader(header: String) = ???
}
class HttpRequest(path: String, headers: List[String])
extends HttpMessage(headers)
new HttpRequest("/", List("foo")).addHeader("bar")
How can I make the addHeader method return a copy of itself with the new header added? (and keep the current value of path as well)
Thanks,
Rob.
It is annoying, but the solution for implementing the pattern you need is not trivial.
The first point to notice is that if you want to preserve the subclass type, you need to track it, here with an abstract type member. Without this, you cannot give addHeader a return type in HttpMessage that is more specific than HttpMessage:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String): X
}
Then you can implement the method in your concrete subclasses where you will have to specify the value of X:
class HttpRequest(path: String, headers: List[String])
  extends HttpMessage(headers) {
  type X = HttpRequest
  def addHeader(header: String): HttpRequest = new HttpRequest(path, headers :+ header)
}
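With that in place, the call from the question keeps its specific static type (a quick check of my own):
val request: HttpRequest = new HttpRequest("/", List("foo")).addHeader("bar")  // X = HttpRequest, so no downcast is needed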
A better, more scalable solution is to use implicits for the purpose.
trait HeaderAdder[T <: HttpMessage] {
  def addHeader(httpMessage: T, header: String): T
}
and now you can define your method on the HttpMessage class like the following:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String)(implicit headerAdder: HeaderAdder[X]): X =
    headerAdder.addHeader(this, header) // note: passing this where an X is expected may need a self-type or a cast in real code
}
This latest approach is based on the typeclass concept and scales much better than inheritance. The idea is that you are not forced to have a valid HeaderAdder[T] for every T in your hierarchy, and if you try to call the method on a class for which no implicit is available in scope, you get a compile-time error.
This is great, because it saves you from having to implement addHeader = sys.error("This is not supported") for certain classes in the hierarchy when it becomes "dirty", or from refactoring to avoid it becoming "dirty".
The best way to manage the implicits is to put them in a trait like the following:
trait HeaderAdders {
  implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] = new HeaderAdder[HttpRequest] { ... }
  implicit val httpWhatHeaderAdder: HeaderAdder[HttpWhat] = new HeaderAdder[HttpWhat] { ... }
}
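Each { ... } body might be filled in along these lines (a sketch of my own, assuming HttpRequest exposes path and headers as vals):
implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] =
  new HeaderAdder[HttpRequest] {
    def addHeader(message: HttpRequest, header: String): HttpRequest =
      new HttpRequest(message.path, message.headers :+ header)
  }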
and then you also provide an object, in case the user can't mix the trait in (for example, if you have frameworks that inspect the properties of your objects through reflection, you don't want extra properties added to your instances) (http://www.artima.com/scalazine/articles/selfless_trait_pattern.html):
object HeaderAdders extends HeaderAdders
So for example you can write things such as
// mixing example
class MyTest extends HeaderAdders // who cares about having two extra values in the object
// import example
import HeaderAdders._
class MyDomainClass // implicits are in scope, but not mixed into MyDomainClass, so reflection from Hibernate will still work correctly
By the way, this design problem is the same one the Scala collections face, with the only difference that there your HttpMessage is TraversableLike. Have a look at this question: Calling map on a parallel collection via a reference to an ancestor type