What does it mean when it is said that Scala provides first-class module support through the object syntax? Nothing in the glossary even mentions the phrase, but I've run into it twice now and haven't been able to decipher it. It was said in this blog post regarding adapters.
A "module" is a pluggable piece of software, also called a "package" sometimes. It provides a functionality through a well-defined set of interfaces, which declare what it provides and what it requires. Finally, it is interchangeable.
There are few languages with direct support for modules, mostly because while support for declaring APIs is common, support for declaring dependencies or requirements is not. Libraries in popular languages will usually interface by relying on types provided by the "standard" library, or requiring initialization with objects implementing APIs they provide.
So, if I want to make a benchmark module, I'll usually resort to clock facilities provided by the standard libraries or, at worst, I'll declare a clock type and request to be initialized with a class implementing it before the module's functionality is ready to be used.
When module support is available, however, I'll not only declare the benchmark interfaces I provide, but I'll also declare that I need a "clock module" -- a module exporting certain interfaces that I need.
A client of my module would not be required to do anything to use my interfaces -- it could just go ahead and use them. Or it could not even declare that my benchmark module would be used and, instead, declare that it has a requirement for a benchmark module.
What will satisfy the requirements is only decided at the "top" level -- the application level (modules are components of applications). At that point the application would declare that it uses that client, my benchmark, and a module implementing my clock requirements.
If you know Guice, this might seem familiar. The problems that Guice addresses are in large part caused by the lack of module support in the Java programming language.
So, back to Scala. How does module support work? Well, the interface of my module might look like this:
trait Benchmark extends Clock { // Clock is what I need
  // What I provide
  type ABench <: Bench
  trait Bench {
    def measure(task: => Unit): Long
  }
  def aBench: ABench
}
and Clock would be a module definition, such as this:
trait Clock {
  // What it provides
  type AClock <: Clock // refers to the inner Clock trait below
  trait Clock { // the inner interface; shadows the outer module trait's name
    def now(): Long
  }
  def aClock: AClock
}
My module itself might look like this:
trait MyModule extends Benchmark {
  class ABench extends Bench {
    def measure(task: => Unit): Long = {
      val measurements = for (_ <- 1 to 10) yield {
        val start = aClock.now()
        task
        val end = aClock.now()
        end - start
      }
      measurements.sum / 10
    }
  }
  object aBench extends ABench
}
The Clock module would be similarly defined. An application could be declared as being a composition of modules:
trait application extends Clock with Benchmark with ...
though, of course, dependencies need not be declared, as they are already provided for. You can then combine modules that provide the requirements to build the application:
object Application extends MyModule with JavaClock with ...
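For concreteness, here is a sketch of what a JavaClock implementation might look like (the name and clock source are illustrative; it mirrors the structure of MyModule):

trait JavaClock extends Clock {
  class AClock extends Clock { // extends the inner Clock trait
    def now(): Long = System.currentTimeMillis()
  }
  object aClock extends AClock // satisfies def aClock: AClock
}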
And that would link the requirements of MyModule with the implementation provided by JavaClock. The approach above still requires some shared knowledge, because "Clock" is likely an API that I provide. One can write a proxy, of course, but it's not plug and play. Scala can go a bit further if I declare my Benchmark module like this:
trait Benchmark {
  // What I need
  type Clock = {
    def now(): Long
  }
  def aClock: Clock

  // What I provide
  type ABench <: Bench
  trait Bench {
    def measure(task: => Unit): Long
  }
  def aBench: ABench
}
Now any class that offers a now(): Long method can be used to satisfy the requirement, without any bridge. Of course, if the name of the method is millis(): Long instead of now(): Long, I'm still screwed, and that kind of "binding" is something that languages providing module support might address, though Scala does not. Also, due to how the JVM works, structural types are implemented with reflection, so there's a performance penalty as well.
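To make the structural-typing point concrete, here is a small self-contained sketch (all names are made up; the reflectiveCalls import silences the feature warning):

import scala.language.reflectiveCalls

object StructuralDemo {
  type Clock = { def now(): Long } // structural type: anything with now(): Long fits

  object SystemClock {
    def now(): Long = System.currentTimeMillis()
  }

  def time(clock: Clock)(task: => Unit): Long = {
    val start = clock.now()
    task
    clock.now() - start // the now() calls go through reflection at runtime
  }

  def main(args: Array[String]): Unit =
    println(time(SystemClock) { Thread.sleep(5) })
}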
So, that's the module and the module support. Finally, first-class modules. First-class support for X means X can be manipulated as a value. For example, Scala has first-class support for functions, which means I can pass a function to a method, store it in a variable, in a map, etc.
First-class support for modules is basically instantiation, though one can use an object to create a singleton of the module composition and then pass that around (the advantages of which I discuss further below). So I can do this:
object myBenchmark extends MyModule with JVMClock
and pass myBenchmark as a parameter to methods that need such a module.
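For example, a sketch of such a method (hypothetical; any code that only needs the Benchmark interface would do):

// accepts any fully assembled Benchmark module
def report(module: Benchmark)(work: => Unit): Unit =
  println("average time: " + module.aBench.measure(work))

report(myBenchmark) { (1 to 1000).sum }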
There are two elements in Scala that make all of that work: abstract types and path dependent types.
Abstract types, the type declarations, make it possible for a piece of code to declare that it will be using a type X that is not going to be defined by whoever calls or instantiates it, but, rather, at the moment the modules get composed.
Path dependent types make it possible to work with modules without being completely unsafe, but without being so restrictive as to not allow anything. Let's say I do this:
val b: myBenchmark.Clock = myBenchmark.aClock
And let's say I have a method on Benchmark that takes a Clock as a parameter. I can call that method on myBenchmark, passing b as a parameter, because Scala knows that the Clock of b is the clock bound to myBenchmark. And if I try to pass b to another object implementing Benchmark, Scala will not let me do it. That is, I can get values out of a Benchmark that are specific to the abstract types of that Benchmark -- unknown to all but the module implementations -- and feed them back to the same Benchmark, but not to other Benchmark implementations.
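A sketch of that compile-time check, assuming (hypothetically) that Benchmark also declared a method def calibrate(c: AClock): Unit = (), and using the JVMClock mix-in mentioned above:

object bench1 extends MyModule with JVMClock
object bench2 extends MyModule with JVMClock

val c: bench1.AClock = bench1.aClock
bench1.calibrate(c)    // compiles: c is bound to bench1
// bench2.calibrate(c) // rejected: bench1.AClock is not bench2.AClock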
I am trying to understand why one would use untyped actors over typed actors.
I have read several posts on this, some of them below:
What is the difference between Typed and UnTyped Actors in Akka? When to use what?
http://letitcrash.com/post/19074284309/when-to-use-typedactors
I am interested in understanding why untyped actors are better in the context of:
a web server,
a distributed architecture,
scalability,
interoperability with applications written in other programming languages.
I am aware that untyped actors are better in the context of FSM because of the become/unbecome functionality.
I can see the possibilities of untyped in a load balancer, as it does not have to be aware of the contents of the messages, but can just forward them to other actors. However, this could be implemented in a typed actor as well.
Can someone come up with a few use cases in the areas mentioned above, where untyped actors are "better"?
There is a generic disadvantage of typed actors: they are hard to extend. When you use normal traits you can easily combine them to build an object that implements both interfaces:
trait One {
  def callOne(arg: String)
}
trait Two {
  def callTwo(arg: Double)
}
trait Both extends One with Two
The Both trait supports the two calls combined from the two traits.
If you use the actor approach, processing messages instead of making direct calls, you can still extend interfaces, but you give up type safety as the price:
trait One {
  // declared on Any so the handlers can be combined with orElse below;
  // the case clause still only accepts Strings
  val receiveOne: PartialFunction[Any, Unit] = {
    case msg: String => ()
  }
}
trait Two {
  val receiveTwo: PartialFunction[Any, Unit] = {
    case msg: Double => ()
  }
}
trait Both extends One with Two {
  val receive: PartialFunction[Any, Unit] = receiveOne orElse receiveTwo
}
The receive value in the Both trait combines the two partial functions. The first accepts only Strings, the second only Doubles. Their only common supertype is Any, so the extended version has to take Any as its argument and becomes effectively untyped. The flaw is in the Scala type system: it supports type intersection with the with keyword, but does not support union types. You cannot define Double or String.
Typed actors lose the ability to be extended easily. Actors shift type checks to a contravariant position, and extending them requires union types. You can see how those work in the Ceylon programming language.
It is not that untyped and typed actors have different spheres of application. All of the functionality in question may be expressed in terms of either. The choice is more about methodology and convenience.
Typing allows you to catch some errors before they reach unit testing. The cost is boilerplate for auxiliary protocol declarations. In the example above you would have to declare the union type explicitly:
trait Protocol
final case class First(message : String) extends Protocol
final case class Second(message : Double) extends Protocol
And you lose easy callback combination: there is no orElse method for you, only hand-written dispatch:
val receive: PartialFunction[Protocol, Unit] = {
  case First(msg)  => receiveOne(msg)
  case Second(msg) => receiveTwo(msg)
}
And if you would like to add a bit of new functionality with a trait Three, you would be busy rewriting that boilerplate code.
Akka provides some useful predefined enhancements for actors. They add new functionality either by mixin (e.g. receive pipeline) or by delegation (e.g. reliable proxy). Proxy patterns are used quite a lot in Akka applications, and they change the protocol on the fly, adding control commands to it. That could not be done so easily with typed actors. So instead of predefined utilities you would be forced to write your own implementations, and the utilities you would forgo are not limited to FSM.
It is up to you to decide whether the typing improvement is worth the increased work. No one can give precise advice without a deep understanding of your project.
Typed actors are very new; they're explicitly marked as experimental and not ready for production use.
Warning
This module is currently experimental in the sense of being the subject of active research. This means that API or semantics can change without warning or deprecation period and it is not recommended to use this module in production just yet—you have been warned.
(as of the time this is written)
I'd like to point out a confusion that seems to have surfaced here.
Casper, the "typed actors" you refer to are deprecated and will be even removed eventually, I have explained in much detail why that's the case here: Akka typed actors in Java. The link you found with Viktor Klang answering, is talking about Akka 1.2 which is "ancient" as of now (when 2.4 is the stable release).
Having said that, there is a new experimental module called "Akka Typed", which Daenyth refers to in his reply. That module may indeed become a new core abstraction; however, it's not yet ready for prime time.
I recommend you give the typed modules a look: Akka Streams (the latest addition to Akka, which will leave experimental status very soon) and Akka Typed, to see how Actors may become typed in the near future (perhaps). Then actually look again at Actors and see which model works best for your use case. Untyped Actors have the advantage of being a tried and true, mature module/model, so you can really trust them in that sense. If you want more types, Akka Streams has you covered in many cases, but not all, so then you may consider the experimental module (but be aware, we most likely will change the Typed API while maturing it).
How do I create a properly functional configurable object in Scala? I have watched Tony Morris' video on the Reader monad and I'm still unable to connect the dots.
I have a hard-coded list of Client objects:
case class Client(name: String, age: Int) { /* etc */ }

object Client {
  // Horrible!
  val clients = List(Client("Bob", 20), Client("Cindy", 30))
}
I want Client.clients to be determined at runtime, with the flexibility of either reading it from a properties file or from a database. In the Java world I'd define an interface, implement the two types of source, and use DI to assign a class variable:
trait ConfigSource {
  def clients: List[Client]
}

object ConfigFileSource extends ConfigSource {
  override def clients = buildClientsFromProperties(Properties("clients.properties"))
  // ...etc, read properties files
}

object DatabaseSource extends ConfigSource { /* etc */ }

object Client {
  @Resource("configuration_source")
  private var config: ConfigSource = _ // inject it at runtime
  val clients = config.clients
}
This seems like a pretty clean solution to me (not a lot of code, clear intent), but that var does jump out (OTOH, it doesn't seem to me really troublesome, since I know it will be injected once-and-only-once).
What would the Reader monad look like in this situation and, explain it to me like I'm 5, what are its advantages?
Let's start with a simple, superficial difference between your approach and the Reader approach, which is that you no longer need to hang onto config anywhere at all. Let's say you define the following vaguely clever type synonym:
type Configured[A] = ConfigSource => A
Now, if I ever need a ConfigSource for some function, say a function that gets the n'th client in the list, I can declare that function as "configured":
def nthClient(n: Int): Configured[Client] = {
  config => config.clients(n)
}
So we're essentially pulling a config out of thin air, any time we need one! Smells like dependency injection, right? Now let's say we want the ages of the first, second and third clients in the list (assuming they exist):
def ages: Configured[(Int, Int, Int)] =
  for {
    a0 <- nthClient(0)
    a1 <- nthClient(1)
    a2 <- nthClient(2)
  } yield (a0.age, a1.age, a2.age)
For this, of course, you need some appropriate definition of map and flatMap. I won't get into that here, but will simply say that Scalaz (or Rúnar's awesome NEScala talk, or Tony's which you've seen already) gives you all you need.
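For reference, here is a minimal hand-rolled sketch of that machinery (no Scalaz; ConfiguredOps is a made-up name), which is enough to make the for-comprehension above compile:

// must live inside some enclosing object or trait
implicit class ConfiguredOps[A](self: Configured[A]) {
  def map[B](f: A => B): Configured[B] =
    config => f(self(config))
  def flatMap[B](f: A => Configured[B]): Configured[B] =
    config => f(self(config))(config) // thread the same config through both steps
}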
The important point here is that the ConfigSource dependency and its so-called injection are mostly hidden. The only "hint" that we can see here is that ages is of type Configured[(Int, Int, Int)] rather than simply (Int, Int, Int). We didn't need to explicitly reference config anywhere.
As an aside, this is the way I almost always like to think about monads: they hide their effect so it's not polluting the flow of your code, while explicitly declaring the effect in the type signature. In other words, you needn't repeat yourself too much: you say "hey, this function deals with effect X" in the function's return type, and don't mess with it any further.
In this example, of course, the effect is to read from some fixed environment. Other monadic effects you might be familiar with include error-handling: we can say that Option hides error-handling logic while making the possibility of errors explicit in your method's type. Or, sort of the opposite of reading, the Writer monad hides the thing we're writing to while making its presence explicit in the type system.
Now finally, just as we normally need to bootstrap a DI framework (somewhere outside our usual flow of control, such as in an XML file), we also need to bootstrap this curious monad. Surely we'll have some logical entry point to our code, such as:
def run: Configured[Unit] = // ...
It ends up being pretty simple: since Configured[A] is just a type synonym for the function ConfigSource => A, we can just apply the function to its "environment":
run(ConfigFileSource)
// or
run(DatabaseSource)
Ta-da! So, contrasting with the traditional Java-style DI approach, we don't have any "magic" occurring here. The only magic, as it were, is encapsulated in the definition of our Configured type and the way it behaves as a monad. Most importantly, the type system keeps us honest about which "realm" dependency injection is occurring in: anything with type Configured[...] is in the DI world, and anything without it is not. We simply don't get this in old-school DI, where everything is potentially managed by the magic, so you don't really know which portions of your code are safe to reuse outside of a DI framework (for example, within your unit tests, or in some other project entirely).
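To make that bootstrap concrete, here is a sketch of a possible run, built from the ages function above (hypothetical body, using the ConfiguredOps enrichment sketched earlier):

def run: Configured[Unit] =
  ages.map { case (x, y, z) =>
    println(s"first three ages: $x, $y, $z")
  }

run(ConfigFileSource) // the single place where the dependency is supplied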
update: I wrote up a blog post which explains Reader in greater detail.
Recently I read the following SO question:

Are there any use cases for employing the Visitor Pattern in Scala? Should I use Pattern Matching in Scala every time I would have used the Visitor Pattern in Java?

The question's title is Visitor Pattern in Scala. The accepted answer begins with:

Yes, you should probably start off with pattern matching instead of the visitor pattern. See this http://www.artima.com/scalazine/articles/pattern_matching.html
My question (inspired by the question above) is: which GOF design patterns have an entirely different implementation in Scala? Where should I be careful not to follow the Java-based programming model of Design Patterns (Gang of Four) when programming in Scala?
Creational patterns
Abstract Factory
Builder
Factory Method
Prototype
Singleton : Directly create an Object (scala)
Structural patterns
Adapter
Bridge
Composite
Decorator
Facade
Flyweight
Proxy
Behavioral patterns
Chain of responsibility
Command
Interpreter
Iterator
Mediator
Memento
Observer
State
Strategy
Template method
Visitor : Pattern Matching (scala)
For almost all of these, there are Scala alternatives that cover some but not all of the use cases for these patterns. All of this is IMO, of course, but:
Creational Patterns
Builder
Scala can do this more elegantly with generic types than can Java, but the general idea is the same. In Scala, the pattern is most simply implemented as follows:
trait Status
trait Done extends Status
trait Need extends Status

case class Built(a: Int, b: String) {}

class Builder[A <: Status, B <: Status] private () {
  private var built = Built(0, "")
  def setA(a0: Int) = { built = built.copy(a = a0); this.asInstanceOf[Builder[Done, B]] }
  def setB(b0: String) = { built = built.copy(b = b0); this.asInstanceOf[Builder[A, Done]] }
  def result(implicit ev: Builder[A, B] <:< Builder[Done, Done]) = built
}

object Builder {
  def apply() = new Builder[Need, Need]
}
(If you try this in the REPL, make sure that the class and object Builder are defined in the same block, i.e. use :paste.) The combination of checking types with <:<, generic type arguments, and the copy method of case classes make a very powerful combination.
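A usage sketch, showing how the phantom Status parameters track which setters have been called:

val ok = Builder().setA(1).setB("two").result // compiles: both parameters are Done
// Builder().setA(1).result // rejected: B is still Need, so the <:< evidence is missing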
Factory Method (and Abstract Factory Method)
Factory methods' main use is to keep your types straight; otherwise you may as well use constructors. With Scala's powerful type system, you don't need help keeping your types straight, so you may as well use the constructor or an apply method in the companion object to your class and create things that way. In the companion-object case in particular, it is no harder to keep that interface consistent than it is to keep the interface in the factory object consistent. Thus, most of the motivation for factory objects is gone.
Similarly, many cases of abstract factory methods can be replaced by having a companion object inherit from an appropriate trait.
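A sketch of the companion-object style (hypothetical Shape hierarchy):

sealed trait Shape
final case class Circle(radius: Double) extends Shape
final case class Square(side: Double) extends Shape

object Shape {
  // plays the role of a factory method; no separate factory class needed
  def apply(kind: String, size: Double): Shape = kind match {
    case "circle" => Circle(size)
    case _        => Square(size)
  }
}

val s = Shape("circle", 2.0)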
Prototype
Of course overridden methods and the like have their place in Scala. However, the examples used for the Prototype pattern on the Design Patterns web site are rather inadvisable in Scala (or Java, IMO). That said, if you wish to have a superclass select actions based on its subclasses rather than letting them decide for themselves, you should use match rather than clunky instanceof tests.
Singleton
Scala embraces these with object. They are singletons--use and enjoy!
Structural Patterns
Adapter
Scala's trait provides much more power here--rather than creating a class that implements an interface, for example, you can create a trait which implements only part of the interface, leaving the rest for you to define. For example, java.awt.event.MouseMotionListener requires you to fill in two methods:
def mouseDragged(me: java.awt.event.MouseEvent)
def mouseMoved(me: java.awt.event.MouseEvent)
Maybe you want to ignore dragging. Then you write a trait:
trait MouseMoveListener extends java.awt.event.MouseMotionListener {
  def mouseDragged(me: java.awt.event.MouseEvent) {}
}
Now you can implement only mouseMoved when you inherit from this. So: similar pattern, but much more power with Scala.
Bridge
You can write bridges in Scala. It's a huge amount of boilerplate, though not quite as bad as in Java. I wouldn't recommend routinely using this as a method of abstraction; think about your interfaces carefully first. Keep in mind that with the increased power of traits that you can often use those to simplify a more elaborate interface in a place where otherwise you might be tempted to write a bridge.
In some cases, you may wish to write an interface transformer instead of the Java bridge pattern. For example, perhaps you want to treat drags and moves of the mouse using the same interface with only a boolean flag distinguishing them. Then you can
trait MouseMotioner extends java.awt.event.MouseMotionListener {
  def mouseMotion(me: java.awt.event.MouseEvent, drag: Boolean): Unit
  def mouseMoved(me: java.awt.event.MouseEvent) { mouseMotion(me, false) }
  def mouseDragged(me: java.awt.event.MouseEvent) { mouseMotion(me, true) }
}
This lets you skip the majority of the bridge pattern boilerplate while accomplishing a high degree of implementation independence and still letting your classes obey the original interface (so you don't have to keep wrapping and unwrapping them).
Composite
The composite pattern is particularly easy to achieve with case classes, though making updates is rather arduous. It is equally valuable in Scala and Java.
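A sketch with a hypothetical tree type:

sealed trait Tree
final case class Leaf(value: Int) extends Tree
final case class Node(children: List[Tree]) extends Tree

// operations recurse uniformly over leaves and composites
def total(t: Tree): Int = t match {
  case Leaf(v)    => v
  case Node(kids) => kids.map(total).sum
}

total(Node(List(Leaf(1), Node(List(Leaf(2), Leaf(3)))))) // 6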
Decorator
Decorators are awkward. You usually don't want to use the same methods on a different class in the case where inheritance isn't exactly what you want; what you really want is a different method on the same class which does what you want instead of the default thing. The enrich-my-library pattern is often a superior substitute.
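A sketch of enrich-my-library (method name made up), which adds the method you actually want to the existing class instead of wrapping it:

// must live inside some enclosing object or trait
implicit class ShoutingString(private val s: String) extends AnyVal {
  def shout: String = s.toUpperCase + "!"
}

"hello".shout // "HELLO!" -- no visible wrapper at the call site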
Facade
Facade works better in Scala than in Java because you can have traits carry partial implementations around so you don't have to do all the work yourself when you combine them.
Flyweight
Although the flyweight idea is as valid in Scala as Java, you have a couple more tools at your disposal to implement it: lazy val, where a variable is not created unless it's actually needed (and thereafter is reused), and by-name parameters, where you only do the work required to create a function argument if the function actually uses that value. That said, in some cases the Java pattern stands unchanged.
Proxy
Works the same way in Scala as Java.
Behavioral Patterns
Chain of responsibility
In those cases where you can list the responsible parties in order, you can
xs.find(_.handleMessage(m))
assuming that everyone has a handleMessage method that returns true if the message was handled. If you want to mutate the message as it goes, use a fold instead.
Since it's easy to drop responsible parties into a Buffer of some sort, the elaborate framework used in Java solutions rarely has a place in Scala.
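A sketch with hypothetical handler types:

trait Handler { def handleMessage(m: String): Boolean }

val audit = new Handler {
  def handleMessage(m: String) = { println(s"audit: $m"); false } // never consumes
}
val worker = new Handler {
  def handleMessage(m: String) = { println(s"handled: $m"); true }
}

List(audit, worker).find(_.handleMessage("request")) // stops at the first true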
Command
This pattern is almost entirely superseded by functions. For example, instead of all of
public interface ChangeListener extends EventListener {
  void stateChanged(ChangeEvent e);
}
...
void addChangeListener(ChangeListener listener) { ... }
you simply
def onChange(f: ChangeEvent => Unit)
Interpreter
Scala provides parser combinators which are dramatically more powerful than the simple interpreter suggested as a Design Pattern.
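As a sketch, a tiny interpreter for sums using the parser-combinator library (shipped as the separate scala-parser-combinators module in recent Scala versions):

import scala.util.parsing.combinator.JavaTokenParsers

object Sums extends JavaTokenParsers {
  def expr: Parser[Double] =
    floatingPointNumber ~ rep("+" ~> floatingPointNumber) ^^ {
      case first ~ rest => rest.foldLeft(first.toDouble)(_ + _.toDouble)
    }
}

Sums.parseAll(Sums.expr, "1 + 2 + 3") // successful parse yields 6.0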
Iterator
Scala has Iterator built into its standard library. It is almost trivial to make your own class extend Iterator or Iterable; the latter is usually better since it makes reuse trivial. Definitely a good idea, but so straightforward I'd hardly call it a pattern.
Mediator
This works fine in Scala, but is generally useful for mutable data, and even mediators can fall afoul of race conditions and such if not used carefully. Instead, try when possible to have your related data all stored in one immutable collection, case class, or whatever, and when making an update that requires coordinated changes, change all things at the same time. This won't help you interface with javax.swing, but is otherwise widely applicable:
case class Entry(s: String, d: Double, notes: Option[String]) {}
def parse(s0: String, old: Entry) = {
try { old.copy(s = s0, d = s0.toDouble) }
catch { case e: Exception => old }
}
Save the mediator pattern for when you need to handle multiple different relationships (one mediator for each), or when you have mutable data.
Memento
lazy val is nearly ideal for many of the simplest applications of the memento pattern, e.g.
class OneRandom {
lazy val value = scala.util.Random.nextInt
}
val r = new OneRandom
r.value // Evaluated here
r.value // Same value returned again
You may wish to create a small class specifically for lazy evaluation:
class Lazily[A](a: => A) {
  lazy val value = a
}

val r = new Lazily(scala.util.Random.nextInt)
// not actually called until/unless we ask for r.value
Observer
This is a fragile pattern at best. Favor, whenever possible, either keeping immutable state (see Mediator), or using actors where one actor sends messages to all others regarding the state change, but where each actor can cope with being out of date.
State
This is equally useful in Scala, and is actually the favored way to create enumerations when applied to methodless traits:
sealed trait DayOfWeek
final trait Sunday extends DayOfWeek
...
final trait Saturday extends DayOfWeek
(often you'd want the weekdays to do something to justify this amount of boilerplate).
Strategy
This is almost entirely replaced by having methods take functions that implement a strategy, and providing functions to choose from.
def printElapsedTime(t: Long, rounding: Double => Long = math.round) {
  println(rounding(t * 0.001))
}
printElapsedTime(1700, d => math.floor(d).toLong) // Change strategy
Template Method
Traits offer so many more possibilities here that it's best to just consider them another pattern. You can fill in as much code as you can from as much information as you have at your level of abstraction. I wouldn't really want to call it the same thing.
Visitor
Between structural typing and implicit conversion, Scala has astoundingly more capability than Java's typical visitor pattern. There's no point using the original pattern; you'll just get distracted from the right way to do it. Many of the examples are really just wishing there was a function defined on the thing being visited, which Scala can do for you trivially (i.e. convert an arbitrary method to a function).
Ok, let's have a brief look at these patterns. I'm looking at all these patterns purely from a functional programming point of view, and leaving out many things that Scala can improve from an OO point of view. Rex Kerr's answer provides an interesting counterpoint to my own answers (I only read his answer after writing my own).
With that in mind, I'd like to say that it is important to study persistent data structures (functionally pure data structures) and monads. If you want to go deep, I think category theory basics are important -- category theory can formally describe all program structures, including imperative ones.
Creational Patterns
A constructor is nothing more than a function. A parameterless constructor for type T is nothing more than a function () => T, for example. In fact, Scala's case classes take advantage of this syntactic sugar for functions:
case class T(x: Int)
That is equivalent to:
class T(val x: Int) { /* bunch of methods */ }

object T {
  def apply(x: Int) = new T(x)
  /* other stuff */
}
So that you can instantiate T with T(n) instead of new T(n). You could even write it like this:
object T extends (Int => T) {
  def apply(x: Int) = new T(x)
  /* other stuff */
}
Which turns T into a formal function, without changing any code.
This is the important point to keep in mind when thinking of creational patterns. So let's look at them:
Abstract Factory
This one is unlikely to change much. A class can be thought of as a group of closely related functions, so a group of closely related functions is easily implemented through a class, which is what this pattern does for constructors.
Builder
Builder patterns can be replaced by curried functions or partial function applications.
// As a curried function:
def makeCar: Size => Engine => Luxuries => Car = ???
def makeLargeCars = makeCar(Size.Large)

// Or by partially applying an uncurried function:
def makeCar: (Size, Engine, Luxuries) => Car = ???
def makeLargeCars = makeCar(Size.Large, _: Engine, _: Luxuries)
Factory Method
Becomes obsolete if you discard subclassing.
Prototype
Doesn't change -- in fact, this is a common way of creating data in functional data structures. See case classes' copy method, or all the non-mutating methods on collections which return new collections.
Singleton
Singletons are not particularly useful when your data is immutable, but Scala's object implements this pattern in a safe manner.
Structural Patterns
This is mostly related to data structures, and the important point in functional programming is that the data structures are usually immutable. You'd be better off looking at persistent data structures, monads and related concepts than trying to translate these patterns.
That's not to say that some patterns here aren't relevant. I'm just saying that, as a general rule, you should look into the things above instead of trying to translate structural patterns into functional equivalents.
Adapter
This pattern is related to classes (nominal typing), so it remains important as long as you have that, and is irrelevant when you don't.
Bridge
Related to OO architecture, so the same as above.
Composite
Look at Lenses and Zippers.
Decorator
A Decorator is just function composition. If you are decorating a whole class, that may not apply. But if you provide your functionality as functions, then composing a function while maintaining its type is a decorator.
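A sketch of that idea:

val parse: String => Int = _.trim.toInt
// "decorate" parse with logging; the type String => Int is preserved
val logged: String => Int =
  s => { println(s"parsing '$s'"); parse(s) }

logged(" 42 ") // prints, then returns 42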
Facade
Same comment as for Bridge.
Flyweight
If you think of constructors as functions, think of flyweight as function memoization. Also, Flyweight is intrinsically related to how persistent data structures are built, and it benefits a lot from immutability.
Proxy
Same comment as for Adapter.
Behavioral Patterns
This is all over the place. Some of them are completely useless, while others are as relevant as always in a functional setting.
Chain of Responsibility
Like Decorator, this is function composition.
Command
This is a function. The undo part is not necessary if your data is immutable. Otherwise, just keep a pair of function and its reverse. See also Lenses.
Interpreter
This is a monad.
Iterator
It can be rendered obsolete by just passing a function to the collection. That's what Traversable does with foreach, in fact. Also, see Iteratee.
Mediator
Still relevant.
Memento
Useless with immutable objects. Also, its point is keeping encapsulation, which is not a major concern in FP.
Note that this pattern is not serialization, which is still relevant.
Observer
Relevant, but see Functional Reactive Programming.
State
This is a monad.
Strategy
A strategy is a function.
Template Method
This is an OO design pattern, so it's relevant for OO designs.
Visitor
A visitor is just a method receiving a function. In fact, that's what Traversable's foreach does.
In Scala, it can also be replaced with extractors.
I suppose the Command pattern is not needed in functional languages at all. Instead of encapsulating a command function inside an object and then selecting the appropriate object, just use the appropriate function itself.
Flyweight is just a cache, and it has a default implementation in most functional languages (memoize in Clojure).
Even Template Method, Strategy and State can be implemented by just passing an appropriate function into a method.
So, I recommend not going deep into Design Patterns when trying out the functional style, but instead reading some books about functional concepts (higher-order functions, laziness, currying, and so on).
I've noticed that the Scala standard library uses two different strategies for organizing classes, traits, and singleton objects.
Using packages whose members are then imported. This is, for example, how you get access to scala.collection.mutable.ListBuffer. This technique is familiar coming from Java, Python, etc.
Using type members of traits. This is, for example, how you get access to the Parser type. You first need to mix in scala.util.parsing.combinator.Parsers. This technique is not familiar coming from Java, Python, etc, and isn't much used in third-party libraries.
I guess one advantage of (2) is that it organizes both methods and types, but in light of Scala 2.8's package objects the same can be done using (1). Why have both these strategies? When should each be used?
The nomenclature of note here is path-dependent types. That's the option number 2 you talk of, and I'll speak only of it. Unless you happen to have a problem solved by it, you should always take option number 1.
What you miss is that the Parser class makes reference to things defined in the Parsers class. In fact, the Parser class itself depends on what input has been defined on Parsers:
abstract class Parser[+T] extends (Input => ParseResult[T])
The type Input is defined like this:
type Input = Reader[Elem]
And Elem is abstract. Consider, for instance, RegexParsers and TokenParsers. The former defines Elem as Char, while the latter defines it as Token. That means the Parser for each is different. More importantly, because Parser is an inner class of Parsers, the Scala compiler will make sure at compile time that you aren't passing RegexParsers's Parser to TokenParsers or vice versa. As a matter of fact, you won't even be able to pass the Parser of one instance of RegexParsers to another instance of it.
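A sketch of that last point (two instances of the same parser class, hence two incompatible Parser types):

import scala.util.parsing.combinator.RegexParsers

object P1 extends RegexParsers { def num: Parser[String] = "[0-9]+".r }
object P2 extends RegexParsers { def num: Parser[String] = "[0-9]+".r }

val p: P1.Parser[String] = P1.num
P1.parseAll(p, "42")    // fine
// P2.parseAll(p, "42") // rejected: P1.Parser is not P2.Parser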
The second is also known as the Cake pattern.
It has the benefit that the code inside a class that has the trait mixed in becomes independent of the particular implementation of the methods and types in that trait. It allows you to use the members of the trait without knowing their concrete implementation.
trait Logging {
  def log(msg: String)
}

trait App extends Logging {
  log("My app started.")
}
Above, the Logging trait is the requirement for the App (requirements can also be expressed with self-types). Then, at some point in your application you can decide what the implementation will be and mix the implementation trait into the concrete class.
trait ConsoleLogging extends Logging {
  def log(msg: String) = println(msg)
}
object MyApp extends App with ConsoleLogging
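As an aside, a sketch of the same requirement expressed with a self-type instead of inheritance (App2 is a made-up name):

trait App2 { self: Logging =>
  log("My app started.")
}

object MyApp2 extends App2 with ConsoleLogging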
This has an advantage over imports, in the sense that the requirements of your piece of code aren't bound to the implementation defined by the import statement. Furthermore, it allows you to build and distribute an API which can be used in a different build somewhere else provided that its requirements are met by mixing in a concrete implementation.
However, there are a few things to be careful with when using this pattern.
All of the classes defined inside the trait will have a reference to the outer class. This can be an issue where performance is concerned, or when you're using serialization (when the outer class is not serializable, or worse, if it is, but you don't want it to be serialized).
If your 'module' gets really large, you will either have a very big trait and a very big source file, or will have to distribute the module trait code across several files. This can lead to some boilerplate.
It can force you to have to write your entire application using this paradigm. Before you know it, every class will have to have its requirements mixed in.
The concrete implementation must be known at compile time, unless you use some sort of hand-written delegation. You cannot mix in an implementation trait dynamically based on a value available at runtime.
I guess the library designers didn't regard any of the above as an issue where Parsers are concerned.
I just read and enjoyed the Cake pattern article. However, to my mind, one of the key reasons to use dependency injection is that you can vary the components being used by either an XML file or command-line arguments.
How is that aspect of DI handled with the Cake pattern? The examples I've seen all involve mixing traits in statically.
Since mixing in traits is done statically in Scala, if you want to vary the traits mixed in to an object, create different objects based on some condition.
Let's take a canonical cake pattern example. Your modules are defined as traits, and your application is constructed as a simple Object with a bunch of functionality mixed in
val application =
  new Object
    with Communications
    with Parsing
    with Persistence
    with Logging
    with ProductionDataSource

application.startup
Now all of those modules have nice self-type declarations which define their inter-module dependencies, so that line only compiles if your all inter-module dependencies exist, are unique, and well-typed. In particular, the Persistence module has a self-type which says that anything implementing Persistence must also implement DataSource, an abstract module trait. Since ProductionDataSource inherits from DataSource, everything's great, and that application construction line compiles.
But what if you want to use a different DataSource, pointing at some local database for testing purposes? Assume further that you can't just reuse ProductionDataSource with different configuration parameters, loaded from some properties file. What you would do in that case is define a new trait TestDataSource which extends DataSource, and mix it in instead. You could even do so dynamically based on a command line flag.
val application =
  if (test)
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with TestDataSource
  else
    new Object
      with Communications
      with Parsing
      with Persistence
      with Logging
      with ProductionDataSource

application.startup
Now that looks a bit more verbose than we would like, particularly if your application needs to vary its construction along multiple axes. On the plus side, you usually only have one chunk of conditional construction logic like that in an application (or at worst once per identifiable component lifecycle), so at least the pain is minimized and fenced off from the rest of your logic.
Scala is also a scripting language, so your configuration XML can be a Scala script. It is type-safe and not a different language.
Simply look at startup:
scala -cp first.jar:second.jar startupScript.scala
is not so different than:
java -cp first.jar:second.jar com.example.MyMainClass context.xml
You can always use DI, but you have one more tool.
The short answer is that Scala doesn't currently have any built-in support for dynamic mixins.
I am working on the autoproxy-plugin to support this, although it's currently on hold until the 2.9 release, when the compiler will have new features making it a much easier task.
In the meantime, the best way to achieve almost exactly the same functionality is by implementing your dynamically added behavior as a wrapper class, then adding an implicit conversion back to the wrapped member.
Until the AutoProxy plugin becomes available, one way to achieve the effect is to use delegation:
trait Module {
  def foo: Int
}

trait DelegatedModule extends Module {
  var delegate: Module = _
  def foo = delegate.foo
}

class Impl extends Module {
  def foo = 1
}

// later
val composed: Module with ... with ... = new DelegatedModule with ... with ...
composed.delegate = choose() // choose is linear in the number of `Module` implementations
But beware, the downside of this is that it's more verbose, and you have to be careful about the initialization order if you use vars inside a trait. Another downside is that if there are path dependent types within Module above, you won't be able to use delegation that easily.
But if there is a large number of different implementations that can be varied, it will probably cost you less code than listing cases with all possible combinations.
Lift has something along those lines built in. It's mostly in scala code, but you have some runtime control. http://www.assembla.com/wiki/show/liftweb/Dependency_Injection