General style question.
As I become better at writing functional code, more of my methods are becoming pure functions. I find that lots of my "classes" (in the loose sense of a container of code) are becoming state-free. Therefore I make them objects instead of classes, as there is no need to instantiate them.
Now in the Java world, having a class full of "static" methods would seem rather odd, and is generally only used for "helper" classes, like you see with Guava and Commons-* and so on.
So my question is: in the Scala world, is having lots of logic inside "objects" and not "classes" quite normal, or is there another preferred idiom?
As your title says, objects are singletons, not classes full of static methods as the body of your question suggests.
And there are a few things that make Scala objects better than both statics and singletons in the Java world, so it is quite "normal" to use them in Scala.
For one thing, unlike static methods, object methods are polymorphic, so you can easily inject objects as dependencies:
scala> trait Quack {def quack="quack"}
defined trait Quack
scala> class Duck extends Quack
defined class Duck
scala> object Quacker extends Quack {override def quack="QUAACK"}
defined module Quacker
// MakeItQuack expects something implementing Quack
scala> def MakeItQuack(q: Quack) = q.quack
MakeItQuack: (q: Quack)java.lang.String
// ...it can be a class
scala> MakeItQuack(new Duck)
res0: java.lang.String = quack
// ...or it can be an object
scala> MakeItQuack(Quacker)
res1: java.lang.String = QUAACK
This makes them usable without tight coupling and without promoting global state (which are two of the issues generally attributed to both static methods and singletons).
Then there's the fact that they do away with all the boilerplate that makes singletons so ugly and unidiomatic-looking in java. This is an often overlooked point, in my opinion, and part of what makes singletons so frowned upon in java even when they are stateless and not used as global state.
Also, the boilerplate you have to repeat in all java singletons gives the class two responsibilities: ensuring there's only one instance of itself and doing whatever it's supposed to do. The fact that scala has a declarative way of specifying that something is a singleton relieves the class and the programmer from breaking the single responsibility principle. In scala you know an object is a singleton and you can just reason about what it does.
You can also use package objects, e.g. take a look at the scala.math package object here:
https://lampsvn.epfl.ch/trac/scala/browser/scala/tags/R_2_9_1_final/src//library/scala/math/package.scala
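To give a flavour, here is a minimal sketch of a hypothetical package object (the mylib name and its members are made up): everything declared in it becomes visible throughout the package and to anyone who imports it.
// hypothetical file mylib/package.scala
package object mylib {
  val DefaultTimeout = 5000
  def clamp(x: Int, lo: Int, hi: Int): Int = math.max(lo, math.min(hi, x))
}
// elsewhere, `import mylib._` brings DefaultTimeout and clamp directly into scope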
Yes, I would say it is normal.
For most of my classes I create a companion object to handle initialization/validation logic. For example, instead of throwing an exception if validation of parameters fails in a constructor, it is possible to return an Option or an Either from the companion object's apply method:
class X(val x: Int) {
require(x >= 0)
}
// ==>
object X {
def apply(x: Int): Option[X] =
if (x < 0) None else Some(new X(x))
}
class X private (val x: Int)
In the companion object one can add a lot of additional logic, such as a cache for immutable objects.
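As a rough sketch of such a cache (the Color class and caching policy are made up for illustration), the companion's apply can consult a private map before allocating a new immutable instance:
final class Color private (val rgb: Int)
object Color {
  // naive, non-thread-safe cache; a real one would be concurrent and/or bounded
  private val cache = scala.collection.mutable.Map.empty[Int, Color]
  def apply(rgb: Int): Color = cache.getOrElseUpdate(rgb, new Color(rgb))
}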
objects are also good for sending signals between instances if there is no need to also send messages:
object X {
def doSomething(s: String) = ???
}
case class C(s: String)
import akka.actor.Actor

class A extends Actor {
  var calculateLater: String = ""
  def receive = {
    case X    => X.doSomething(calculateLater) // the object X itself acts as the signal
    case C(s) => calculateLater = s
  }
}
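Usage is then just a matter of sending the object itself as the message (a sketch, assuming Akka on the classpath and the X, C and A definitions above):
import akka.actor.{ActorSystem, Props}
val system = ActorSystem("demo")
val a = system.actorOf(Props[A], "a")
a ! C("remember this") // a message carrying data
a ! X                  // a pure signal, no payload needed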
Another use case for objects is to reduce the scope of elements:
// traits with lots of members
trait A
trait B
trait C
trait Trait {
def validate(s: String) = {
import validator._
// use logic of validator
}
private object validator extends A with B with C {
// all members of A, B and C are visible here
}
}
class Class extends Trait {
// no unnecessary members and name conflicts here
}
Related
I'm writing a set of implicit Scala wrapper classes for an existing Java library (so that I can decorate that library to make it more convenient for Scala developers).
As a trivial example, let's say that the Java library (which I can't modify) has a class such as the following:
public class Value<T> {
// Etc.
public void setValue(T newValue) {...}
public T getValue() {...}
}
Now let's say I want to decorate this class with Scala-style getters and setters. I can do this with the following implicit class:
final implicit class RichValue[T](private val v: Value[T])
extends AnyVal {
// Etc.
def value: T = v.getValue
def value_=(newValue: T): Unit = v.setValue(newValue)
}
The implicit keyword tells the Scala compiler that it can convert instances of Value to be instances of RichValue implicitly (provided that the latter is in scope). So now I can apply methods defined within RichValue to instances of Value. For example:
def increment(v: Value[Int]): Unit = {
v.value = v.value + 1
}
(Agreed, this isn't very nice code, and is not exactly functional. I'm just trying to demonstrate a simple use case.)
Unfortunately, Scala does not allow implicit classes to be top-level, so they must be defined within a package object, object, class or trait and not just in a package. (I have no idea why this restriction is necessary, but I assume it's for compatibility with implicit conversion functions.)
However, I'm also extending RichValue from AnyVal to make this a value class. If you're not familiar with them, they allow the Scala compiler to make allocation optimizations. Specifically, the compiler does not always need to create instances of RichValue, and can operate directly on the value class's constructor argument.
In other words, there's very little performance overhead from using a Scala implicit value class as a wrapper, which is nice. :-)
However, a major restriction of value classes is that they cannot be defined within a class or a trait; they can only be members of packages, package objects or objects. (This is so that they do not need to maintain a pointer to the outer class instance.)
An implicit value class must honor both sets of constraints, so it can only be defined within a package object or an object.
And therein lies the problem. The library I'm wrapping contains a deep hierarchy of packages with a huge number of classes and interfaces. Ideally, I want to be able to import my wrapper classes with a single import statement, such as:
import mylib.implicits._
to make using them as simple as possible.
The only way I can currently see of achieving this is to put all of my implicit value class definitions inside a single package object (or object) within a single source file:
package mylib
package object implicits {
implicit final class RichValue[T](private val v: Value[T])
extends AnyVal {
// ...
}
// Etc. with hundreds of other such classes.
}
However, that's far from ideal, and I would prefer to mirror the package structure of the target library, yet still bring everything into scope via a single import statement.
Is there a straightforward way of achieving this that doesn't sacrifice any of the benefits of this approach?
(For example, I know that if I forego making these wrappers value classes, then I can define them within a number of different traits - one for each component package - and have my root package object extend all of them, bringing everything into scope through a single import, but I don't want to sacrifice performance for convenience.)
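For reference, a minimal sketch of that non-value-class alternative (the trait and sub-package names are invented), which mirrors the library's structure at the cost of giving up the AnyVal optimization:
package mylib
// (import the library's Value[T] class here, from wherever it lives)
trait CoreImplicits {
  implicit class RichValue[T](v: Value[T]) { // deliberately NOT a value class
    def value: T = v.getValue
    def value_=(newValue: T): Unit = v.setValue(newValue)
  }
}
trait UtilImplicits {
  // wrappers for another sub-package of the library would go here
}
// the root package object mixes in all the per-package traits, so a single
// `import mylib.implicits._` still brings every wrapper into scope
package object implicits extends CoreImplicits with UtilImplicits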
implicit final class RichValue[T](private val v: Value[T]) extends AnyVal
is essentially syntactic sugar for the following two definitions:
import scala.language.implicitConversions // or use a compiler flag
final class RichValue[T](private val v: Value[T]) extends AnyVal
@inline implicit def RichValue[T](v: Value[T]): RichValue[T] = new RichValue(v)
(which, you might see, is why implicit classes have to be inside traits, objects or classes: they also have a matching def)
There is nothing that requires those two definitions to live together. You can put them into separate objects:
object wrappedLibValues {
final class RichValue[T](private val v: Value[T]) extends AnyVal {
// lots of implementation code here
}
}
object implicits {
@inline implicit def RichValue[T](v: Value[T]): wrappedLibValues.RichValue[T] = new wrappedLibValues.RichValue(v)
}
Or into traits:
object wrappedLibValues {
final class RichValue[T](private val v: Value[T]) extends AnyVal {
// implementation here
}
trait Conversions {
@inline implicit def RichValue[T](v: Value[T]): RichValue[T] = new RichValue(v)
}
}
object implicits extends wrappedLibValues.Conversions
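Either way, call sites only need the single import to pick up the conversion (a small usage sketch, reusing the Value class from the question and assuming the implicits object above is in scope):
import implicits._
def increment(v: Value[Int]): Unit =
  v.value = v.value + 1 // compiles because the implicit def wraps v in a RichValue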
I have the following setup:
trait A
{
def doSomething(): Unit;
}
object B extends A
{
override def doSomething(): Unit =
{
// Implementation
}
}
class B(creator: String) extends A
{
override def doSomething(): Unit =
{
B.doSomething() // Now this is just completely unnecessary, but the compiler of course insists upon implementing the method
}
}
Now you may wonder why I even do this, why I let the class extend the trait as well.
The problem is that somewhere in the program there is a collection of A.
So somewhere:
private val aList: ListBuffer[A] = new ListBuffer[A]
and in there I also have to put Bs (among other derived types, namely C and D).
So I can't just let the B class not extend it.
As the implementation is the same for all instances, I want to use an object.
But there is also a reason I really need this Object. Because there is a class:
abstract class Worker
{
  def getAType(): A
  def `do`(): Unit =   // note: `do` is a reserved word in Scala, so it needs backquotes
  {
    getAType().doSomething()
  }
}
class WorkerA extends Worker
{
  def getAType(): A =
  {
    return B
  }
}
Here the singleton/object of B gets returned. This is needed for the implementation of do() in the Worker.
To summarize:
The object B is needed because of the generic implementation in do() (Worker-Class) and also because doSomething() never changes.
The class B is needed because in the collection of the BaseType A there are different instances of B with different authors.
As both the object and the class have to implement the trait for above reasons I'm in kind of a dilemma here. I couldn't find a satisfying solution that looks neater.
So, my question is (it turns out that, as a non-native speaker, I should've clarified this more):
Is there any way to let a class extend a trait (or class) and say that any abstract-method implementation should be looked up in the object instead of the class, so that I only have to implement doSomething() (from the trait) once (in the object)? As I said, the trait fulfills two different tasks here.
One is being a base type, so that the collection can hold instances of the class. The other is being a contract to ensure the doSomething() method is there in every object.
So the object B needs to extend the trait, because a trait is like a Java interface and every (!) object B (or C, or D) needs to have that method. (So the only option I see is to define an interface/trait and make sure the method is there.)
Edit: in case anyone wonders how I really solved the problem: I implemented two traits.
Now for one class (where I need it) I extend both, and for the other I only extend one. So I never actually have to implement any method that is not absolutely necessary :)
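A rough sketch of what such a split can look like (the DoesSomething name is invented, since the edit doesn't show code): the collection only needs the base trait, while only the object has to implement the behaviour.
trait A // base type; enough for ListBuffer[A]
trait DoesSomething {
  def doSomething(): Unit
}
object B extends A with DoesSomething {
  override def doSomething(): Unit = { /* shared implementation, written once */ }
}
class B(creator: String) extends A // instances go into the collection without a dummy doSomething
The Worker from the question would then presumably depend on DoesSomething rather than on A.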
As I wrote in the comment section, it's really unclear to me what you're asking.
However, looking at your code examples, it seems to me that trait A isn't really required.
You can use the types that already come with the Scala SDK:
object B extends (()=>Unit) {
def apply() { /* implementation */ }
}
Or, as a variant:
object B {
val aType:()=>Unit = {() => /* implementation */ }
}
In the first case, you can access the singleton instance with B, in the second case with B.aType.
In the second case, no explicit declaration of the apply method is needed.
Pick what you like.
The essential message is: You don't need a trait if you just define one simple method.
That's what Scala functions are for.
The list type might look like this:
private val aList:ListBuffer[()=>Unit] = ???
(By the way: Why not declare it as Seq[()=>Unit]? Is it important to the caller that it is a ListBuffer and not some other kind of sequence?)
Your worker might then look like this:
abstract class Worker {
  def aType: () => Unit // no need for the `get` prefix here, or the empty parameter list
  def `do`() { aType() } // `do` again needs backquotes because it is a keyword
}
Note that now the Worker type has become a class that offers a method that invokes a function.
So, there is really no need to have a Worker class.
You can just take the function (aType) directly and invoke it, just so.
If you always want to call the implementation in object B, well - just do that then.
There is no need to wrap the call in instances of other types.
Your example class B just forwards the call to the B object, which is really unnecessary.
There is no need to even create an instance of B.
It does have the private member variable creator, but since it's never used, it will never be accessed in any way.
So, I would recommend to completely remove the class B.
All you need is the type ()=>Unit, which is exactly what you need: A function that takes no parameters and returns nothing.
If you get tired of writing ()=>Unit all the time, you can define a type alias, for example inside the package object.
Here is my recommendation:
type SideEffect = ()=>Unit
Then you can use SideEffect as an alias for ()=>Unit.
That's all I can make of it.
It looks to me that this is probably not what you were looking for.
But maybe this will help you a little bit along the way.
If you want to have a more concrete answer, it would be nice if you would clarify the question.
object B doesn't really have much to do with class B aside from some special rules.
If you wish to reuse that doSomething method you should just reuse the implementation from the object:
class B {
def doSomething() = B.doSomething()
}
If you want to specify object B as a specific instance of class B then you should do the following:
object B extends B("some particular creator") {
...
}
You also do not need override modifiers although they can be handy for compiler checks.
The notion of a companion object extending a trait is useful for defining behavior associated with the class itself (e.g. static methods) as opposed to instances of the class. In other words, it allows your static methods to implement interfaces. Here's an example:
import java.nio.ByteBuffer
// a trait to be implemented by the companion object of a class
// to convey the fixed size of any instance of that class
trait Sized { def size: Int }
// create a buffer based on the size information provided by the
// companion object
def createBuffer(sized: Sized): ByteBuffer = ByteBuffer.allocate(sized.size)
class MyClass(x: Long) {
def writeTo(buffer: ByteBuffer) { buffer.putLong(x) }
}
object MyClass extends Sized {
def size = java.lang.Long.SIZE / java.lang.Byte.SIZE
}
// create a buffer with correct sizing for MyClass whose companion
// object implements Sized. Note that we don't need an instance
// of MyClass to obtain sizing information.
val buf = createBuffer(MyClass)
// write an instance of MyClass to the buffer.
val c = new MyClass(42)
c.writeTo(buf)
In my specific case I have a (growing) library of case classes with a base trait (TKModel)
Then I have an abstract class (TKModelFactory[T <: TKModel]) which is extended by all companion objects.
So my companion objects all inherently know the type ('T') of "answers" they need to provide, as well as the type of objects they "normally" accept for commonly implemented methods. (If I get lazy and cut-and-paste chunks of code and then search-and-destroy edit them, this saves my bacon a lot!) I do see warnings on the Internet at large, however, that any form of CompanionObject.method(caseClassInstance: CaseClass) is rife with "code smell". Not sure whether they actually apply to Scala or not.
There does not, however, seem to be any way to declare anything in the base trait (TKModel) that would refer, at runtime, to the proper companion object for a particular instance of a case class. This results in my having to write (and edit) a few method calls that I want to be standard in each and every case class.
case class Track(id: Long, name: String, statusID: Long) extends TKModel
object Track extends TKModelFactory[Track]
How would I write something in TKModel such that new Track(1, "x", 1).someMethod() could actually call Track.objectMethod()?
Yes, I can write val CO = MyCompanionObject, along with something like implicit val CO: ??? in the TKModel abstract class, and make all the calls hang off of that value. Trying to find any incantation that makes the compiler happy with that, however, seems to be mission impossible. And since I can't declare it, I can't reference it in any placeholder methods in the abstract class either.
Is there a more elegant way to simply get a reference to a case classes companion object?
My specific question, as the above has been asked before (but not yet answered it seems), is there a way to handle the inheritance of both the companion object and the case classes and find the reference such that I can code common method calls in the abstract class?
Or is there a completely different and better model?
If you change TKModel a bit, you can do
abstract class TKModel[T <: TKModel[T]] {
...
def companion: TKModelFactory[T]
def someMethod() = companion.objectMethod()
}
case class Track(id: Long, name: String, statusID: Long) extends TKModel[Track] {
def companion = Track
}
object Track extends TKModelFactory[Track] {
def objectMethod() = ...
}
This way you do need to implement companion in each class. You can avoid this by implementing companion using reflection, something like (untested)
lazy val companion: TKModelFactory[T] = {
Class.forName(getClass.getName + "$").getField("MODULE$").
get(null).asInstanceOf[TKModelFactory[T]]
}
The lazy val is there to avoid repeated reflection calls.
A companion object does not have access to the instance, but there is no reason the case class can't have a method that calls the companion object.
case class Data(value: Int) {
def add(data: Data) = Data.add(this,data)
}
object Data {
def add(d1: Data, d2: Data): Data = Data(d1.value + d2.value)
}
It's difficult. However, you can create an implicit method in the companion object. Whenever you want to invoke your logic from an instance, the implicit rules kick in and the implicit method instantiates another class, which invokes whatever logic you want.
I believe it's also possible to do this in a generic way.
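A small sketch of what that answer seems to describe, reusing the Track example from the question (the TrackOps wrapper is made up): the companion defines an implicit wrapper class, which is found automatically because companion objects are part of the implicit scope of their class.
case class Track(id: Long, name: String, statusID: Long)
object Track {
  def objectMethod(t: Track): String = s"processed ${t.name}"
  // the implicit wrapper forwards instance calls to the companion's logic
  implicit class TrackOps(private val t: Track) extends AnyVal {
    def someMethod(): String = objectMethod(t)
  }
}
// Track(1, "x", 1).someMethod() is then equivalent to Track.objectMethod(Track(1, "x", 1))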
You can implement this syntax as an extension method by defining an implicit class in the top-level abstract class that the companion objects extend:
abstract class TKModelFactory[T <: TKModel] {
def objectMethod(t: T)
implicit class Syntax(t: T) {
def someMethod() = objectMethod(t)
}
}
A call to new Track(1, "x", 1).someMethod() will then be equivalent to Track.objectMethod(new Track(1, "x", 1)).
I'm having trouble finding an elegant way of designing some simple classes to represent HTTP messages in Scala.
Say I have something like this:
abstract class HttpMessage(headers: List[String]) {
def addHeader(header: String) = ???
}
class HttpRequest(path: String, headers: List[String])
extends HttpMessage(headers)
new HttpRequest("/", List("foo")).addHeader("bar")
How can I make the addHeader method return a copy of itself with the new header added? (and keep the current value of path as well)
Thanks,
Rob.
It is annoying, but implementing the pattern you need is not trivial.
The first point to notice is that if you want to preserve your subclass type, you need to add an abstract type member. Without it, you cannot refer to the (as yet unknown) return type in HttpMessage:
abstract class HttpMessage(headers: List[String]) {
type X <: HttpMessage
def addHeader(header: String):X
}
Then you can implement the method in your concrete subclasses where you will have to specify the value of X:
class HttpRequest(path: String, headers: List[String])
extends HttpMessage(headers){
type X = HttpRequest
def addHeader(header: String): HttpRequest = new HttpRequest(path, headers :+ header)
}
A better, more scalable solution is to use implicits for this purpose.
trait HeaderAdder[T<:HttpMessage]{
def addHeader(httpMessage:T, header:String):T
}
and now you can define your method on the HttpMessage class like the following:
abstract class HttpMessage(headers: List[String]) {
  type X <: HttpMessage
  def addHeader(header: String)(implicit headerAdder: HeaderAdder[X]): X =
    headerAdder.addHeader(this.asInstanceOf[X], header) // cast needed because `this` is only statically an HttpMessage
}
This latest approach is based on the typeclass concept and scales much better than inheritance. The idea is that you are not forced to have a valid HeaderAdder[T] for every T in your hierarchy, and if you try to call the method on a class for which no implicit is available in scope, you will get a compile time error.
This is great, because it saves you from having to implement addHeader = sys.error("This is not supported")
for certain classes in the hierarchy when it becomes "dirty", or from refactoring it to keep it from becoming "dirty".
The best way to manage implicits is to put them in a trait like the following:
trait HeaderAdders {
  implicit val httpRequestHeaderAdder: HeaderAdder[HttpRequest] = new HeaderAdder[HttpRequest] { ... }
  implicit val httpWhatHeaderAdder: HeaderAdder[HttpWhat] = new HeaderAdder[HttpWhat] { ... }
}
and then you also provide an object, in case the user can't mix the trait in (for example, if you have frameworks that inspect the properties of your object through reflection, you don't want extra properties added to your current instance); see the selfless trait pattern (http://www.artima.com/scalazine/articles/selfless_trait_pattern.html):
object HeaderAdders extends HeaderAdders
So for example you can write things such as
// mixing example
class MyTest extends HeaderAdders // who cares about having two extra values in the object
// import example
import HeaderAdders._
class MyDomainClass // implicits are in scope, but not mixed into MyDomainClass, so reflection from Hibernate will still work correctly
By the way, this design problem is the same one the Scala collections face, the only difference being that your HttpMessage plays the role of TraversableLike. Have a look at this question: Calling map on a parallel collection via a reference to an ancestor type
I'm trying to figure out how to .clone my own objects, in Scala.
This is for a simulation so mutable state is a must, and from that arises the whole need for cloning. I'll clone a whole state structure before moving the simulation time ahead.
This is my current try:
abstract trait Cloneable[A] {
// Seems we cannot declare the prototype of a copy constructor
//protected def this(o: A) // to be defined by the class itself
def myClone= new A(this)
}
class S(var x: String) extends Cloneable[S] {
def this(o:S)= this(o.x) // for 'Cloneable'
override def toString = x
}
object TestX {
val s1= new S("say, aaa")
println( s1.myClone )
}
a. Why does the above not compile? It gives:
error: class type required but A found
def myClone= new A(this)
^
b. Is there a way to declare the copy constructor (def this(o: A)) in the trait, so that classes using the trait would be shown to need to provide one?
c. Is there any benefit from saying abstract trait?
Finally, is there a way better, standard solution for all this?
I've looked into Java cloning; it does not seem to be meant for this. Scala's copy isn't either - it's only for case classes, and those shouldn't have mutable state.
Thanks for help and any opinions.
Traits can't define constructors (and I don't think abstract has any effect on a trait).
Is there any reason it needs to use a copy constructor rather than just implementing a clone method? It might be possible to get out of having to declare the [A] type on the class, but I've at least declared a self type so the compiler will make sure that the type matches the class.
trait DeepCloneable[A] { self: A =>
def deepClone: A
}
class Egg(size: Int) extends DeepCloneable[Egg] {
def deepClone = new Egg(size)
}
object Main extends App {
val e = new Egg(3)
println(e)
println(e.deepClone)
}
http://ideone.com/CS9HTW
I would suggest a typeclass-based approach. With this it is also possible to let existing classes be cloneable:
class Foo(var x: Int)
trait Copyable[A] {
def copy(a: A): A
}
implicit object FooCloneable extends Copyable[Foo] {
def copy(foo: Foo) = new Foo(foo.x)
}
implicit def any2Copyable[A: Copyable](a: A) = new {
def copy = implicitly[Copyable[A]].copy(a)
}
scala> val x = new Foo(2)
x: Foo = Foo@8d86328
scala> val y = x.copy
y: Foo = Foo@245e7588
scala> x eq y
res2: Boolean = false
a. When you define a type parameter like A, it gets erased after the compilation phase.
This means that the compiler uses type parameters to check that you use the correct types, but the resulting bytecode retains no information of A.
This also implies that you cannot use A as a real class in code but only as a "type reference", because at runtime this information is lost.
b & c. traits cannot define constructor parameters or auxiliary constructors by definition, they're also abstract by definition.
What you can do is define a trait body that gets called upon instantiation of the concrete implementation
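A tiny illustration of that last point (names invented): statements in a trait's body run as part of constructing any concrete class that mixes the trait in.
trait Audited {
  // runs every time a concrete class mixing in Audited is instantiated
  println(s"instantiated: ${getClass.getName}")
}
class Account extends Audited
// new Account() prints "instantiated: Account"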
One alternative solution is to define a Cloneable typeclass. For more on this you can find lots of blogs on the subject, but I have no suggestion for a specific one.
Scalaz has a large part of its library built using this pattern; maybe you can find inspiration there. You can look at Order, Equal or Show to get the gist of it.