Configuration data in Scala -- should I use the Reader monad?

How do I create a properly functional configurable object in Scala? I have watched Tony Morris' video on the Reader monad and I'm still unable to connect the dots.
I have a hard-coded list of Client objects:
case class Client(name: String, age: Int) { /* etc */ }

object Client {
  // Horrible!
  val clients = List(Client("Bob", 20), Client("Cindy", 30))
}
I want Client.clients to be determined at runtime, with the flexibility of either reading it from a properties file or from a database. In the Java world I'd define an interface, implement the two types of source, and use DI to assign a class variable:
trait ConfigSource {
  def clients: List[Client]
}
object ConfigFileSource extends ConfigSource {
  override def clients = buildClientsFromProperties(Properties("clients.properties"))
  // ...etc, read properties files
}

object DatabaseSource extends ConfigSource { /* etc */ }
object Client {
  @Resource(name = "configuration_source")
  private var config: ConfigSource = _ // Inject it at runtime
  val clients = config.clients
}
This seems like a pretty clean solution to me (not a lot of code, clear intent), but that var does jump out (OTOH, it doesn't seem to me really troublesome, since I know it will be injected once-and-only-once).
What would the Reader monad look like in this situation and, explain it to me like I'm 5, what are its advantages?

Let's start with a simple, superficial difference between your approach and the Reader approach, which is that you no longer need to hang onto config anywhere at all. Let's say you define the following vaguely clever type synonym:
type Configured[A] = ConfigSource => A
Now, if I ever need a ConfigSource for some function, say a function that gets the n'th client in the list, I can declare that function as "configured":
def nthClient(n: Int): Configured[Client] = {
  config => config.clients(n)
}
So we're essentially pulling a config out of thin air, any time we need one! Smells like dependency injection, right? Now let's say we want the ages of the first, second and third clients in the list (assuming they exist):
def ages: Configured[(Int, Int, Int)] =
  for {
    a0 <- nthClient(0)
    a1 <- nthClient(1)
    a2 <- nthClient(2)
  } yield (a0.age, a1.age, a2.age)
For this, of course, you need some appropriate definition of map and flatMap. I won't get into that here, but will simply say that Scalaz (or Rúnar's awesome NEScala talk, or Tony's which you've seen already) gives you all you need.
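For the curious, here is a minimal sketch of what those definitions could look like for our bare function type, written as an implicit wrapper (Scalaz gives you a more principled version of the same machinery):

implicit class ConfiguredOps[A](run: Configured[A]) {
  // map: run with the config, then post-process the result
  def map[B](f: A => B): Configured[B] =
    config => f(run(config))
  // flatMap: run with the config, and feed the same config to the next step
  def flatMap[B](f: A => Configured[B]): Configured[B] =
    config => f(run(config))(config)
}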
The important point here is that the ConfigSource dependency and its so-called injection are mostly hidden. The only "hint" that we can see here is that ages is of type Configured[(Int, Int, Int)] rather than simply (Int, Int, Int). We didn't need to explicitly reference config anywhere.
As an aside, this is the way I almost always like to think about monads: they hide their effect so it's not polluting the flow of your code, while explicitly declaring the effect in the type signature. In other words, you needn't repeat yourself too much: you say "hey, this function deals with effect X" in the function's return type, and don't mess with it any further.
In this example, of course, the effect is to read from some fixed environment. Other monadic effects you might be familiar with include error-handling: we can say that Option hides error-handling logic while making the possibility of errors explicit in your method's type. Or, sort of the opposite of reading, the Writer monad hides the thing we're writing to while making its presence explicit in the type system.
Now finally, just as we normally need to bootstrap a DI framework (somewhere outside our usual flow of control, such as in an XML file), we also need to bootstrap this curious monad. Surely we'll have some logical entry point to our code, such as:
def run: Configured[Unit] = // ...
It ends up being pretty simple: since Configured[A] is just a type synonym for the function ConfigSource => A, we can just apply the function to its "environment":
run(ConfigFileSource)
// or
run(DatabaseSource)
Ta-da! So, contrasting with the traditional Java-style DI approach, we don't have any "magic" occurring here. The only magic, as it were, is encapsulated in the definition of our Configured type and the way it behaves as a monad. Most importantly, the type system keeps us honest about which "realm" dependency injection is occurring in: anything with type Configured[...] is in the DI world, and anything without it is not. We simply don't get this in old-school DI, where everything is potentially managed by the magic, so you don't really know which portions of your code are safe to reuse outside of a DI framework (for example, within your unit tests, or in some other project entirely).
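One pleasant consequence: testing needs no container at all. A sketch, using the definitions above with a stub environment:

object StubSource extends ConfigSource {
  def clients = List(Client("Test", 1))
}

// apply the Configured value to the stub, exactly as run was applied above
assert(nthClient(0)(StubSource).name == "Test")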
update: I wrote up a blog post which explains Reader in greater detail.


How to use Scala Cats Validated the correct way?

Following is my use case:
I am using Cats to validate my config. My config file is in JSON; I deserialize it into my case class Config using lift-json and then validate it using Cats. I am using this as a guide.
My motivation for using Cats is to collect all validation errors, if any are present.
My problem is that the examples given in the guide are of this shape:
case class Person(name: String, age: Int)

def validatePerson(name: String, age: Int): ValidationResult[Person] = {
  (validateName(name), validateAge(age)).mapN(Person)
}
But in my case I have already deserialized my config into my case class (below is a sample) and then pass it for validation:
case class Config(source: List[String], dest: List[String], extra: List[String])

def validateConfig(config: Config): ValidationResult[Config] = {
  (validateSource(config.source), validateDestination(config.dest))
    .mapN { case _ => config }
}
The difference here is mapN { case _ => config }. Since I already have a config, if everything is valid I don't want to build the config anew from its members. This arises because I am passing config to the validate function, not its members.
A person at my workplace told me this is not the correct way: Cats Validated provides a way to construct an object only if its members are valid, so the object should not exist, or should not be constructible, if its members are invalid. That makes complete sense to me.
So should I make any changes? Is what I'm doing above acceptable?
PS: The above Config is just an example; my real config can have other case classes as members, which themselves can depend on further case classes.
One of the central goals of the kind of programming promoted by libraries like Cats is to make invalid states unrepresentable. In a perfect world, according to this philosophy, it would be impossible to create an instance of Config with invalid member data (through the use of a library like Refined, where complex constraints can be expressed in and tracked by the type system, or simply by hiding unsafe constructors). In a slightly less perfect world, it might still be possible to construct invalid instances of Config, but discouraged, e.g. through the use of safe constructors (like your validatePerson method for Person).
It sounds like you're in an even less perfect world where you have instances of Config that may or may not contain invalid data, and you want to validate them to get "new" instances of Config that you know are valid. This is totally possible, and in some cases reasonable, and your validateConfig method is a perfectly legitimate way to solve this problem, if you're stuck in that imperfect world.
The downside, though, is that the compiler can't track the difference between the already-validated Config instances and the not-yet-validated ones. You'll have Config instances floating around in your program, and if you want to know whether they've already been validated or not, you'll have to trace through all the places they could have come from. In some contexts this might be just fine, but for large or complex programs it's not ideal.
To sum up: ideally you'd validate Config instances whenever they are created (possibly even making it impossible to create invalid ones), so that you don't have to remember whether any given Config is good or not—the type system can remember for you. If that's not possible, because of e.g. APIs or definitions you don't control, or if it just seems too burdensome for a simple use case, what you're doing with validateConfig is totally reasonable.
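For a concrete picture of the "validate at creation time" idea, here is a minimal sketch (the error type and the rules are invented for illustration; the sealed abstract case class trick suppresses the generated apply and copy in Scala 2, so fromRaw is the only way in):

import cats.data.ValidatedNel
import cats.syntax.all._

final case class ConfigError(msg: String)

sealed abstract case class ValidConfig(source: List[String], dest: List[String], extra: List[String])

object ValidConfig {
  type ValidationResult[A] = ValidatedNel[ConfigError, A]

  private def validateSource(src: List[String]): ValidationResult[List[String]] =
    if (src.nonEmpty) src.validNel else ConfigError("source must not be empty").invalidNel

  private def validateDest(dst: List[String]): ValidationResult[List[String]] =
    if (dst.nonEmpty) dst.validNel else ConfigError("dest must not be empty").invalidNel

  // the only public way to obtain a ValidConfig
  def fromRaw(source: List[String], dest: List[String], extra: List[String]): ValidationResult[ValidConfig] =
    (validateSource(source), validateDest(dest)).mapN((s, d) => new ValidConfig(s, d, extra) {})
}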
As a footnote, since you say above that you're interested in looking in more detail at Refined, what it provides for you in a situation like this is a way to avoid even more functions of the shape A => ValidationResult[A]. Right now your validateName method, for example, probably takes a String and returns a ValidationResult[String]. You can make exactly the same argument against this signature as I have against Config => ValidationResult[Config] above—once you're working with the result (by mapping a function over the Validated or whatever), you just have a string, and the type doesn't tell you that it's already been validated.
What Refined allows you to do is write a method like this:
def validateName(in: String): ValidationResult[Refined[String, SomeProperty]] = ...
…where SomeProperty might specify a minimum length, or the fact that the string matches a particular regular expression, etc. The important point is that you're not validating a String and returning a String that only you know something about—you're validating a String and returning a String that the compiler knows something about (via the Refined[A, Prop] wrapper).
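Concretely, here is a sketch of such a signature using refined's NonEmpty predicate (the plain String error type is just for brevity):

import cats.data.ValidatedNel
import cats.syntax.validated._
import eu.timepit.refined.api.Refined
import eu.timepit.refined.collection.NonEmpty
import eu.timepit.refined.refineV

type Name = String Refined NonEmpty

// refineV returns an Either, which we lift into ValidatedNel
def validateName(in: String): ValidatedNel[String, Name] =
  refineV[NonEmpty](in).fold(_.invalidNel, _.validNel)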
Again, this may be (okay, probably is) overkill for your use case—you just might find it nice to know that you can push this principle (tracking validation in types) even further down through your program.

In Scala, what is cleaner code: a def (member) dependency, versus passing the dependency as a function parameter?

Which is cleaner?
The def version:
trait Foo {
  def db: DB
  def save() = db.save()
  def load() = db.load()
}
versus the parametric version:
trait Foo {
  def save(db: DB) = db.save()
  def load(db: DB) = db.load()
}
(I've intentionally left out other parameters/members; I want to focus on this one.)
I have to say that when I look at complex projects, I'm thankful when functions take all their dependencies as parameters:
I can unit test them easily without overriding members, and the function tells me everything it depends on in its signature.
I don't have to read its internal code to understand what it does: I have its name, its input, and its output, all in the signature.
But I have also noticed that in Scala it is very conventional to use the def version, and I have to say that when such code comes bundled in complex projects, it is much less readable for me. Am I missing something?
I think in this case it highly depends on what the relationship is between Foo and DB. Would it ever be the case that a single instance of Foo would use one DB for load and another for save? If yes, then DB isn't really a dependency of Foo and the first example makes no sense. But it seems to me that the answer is no, that if you call load with one DB, you'll be using the same DB when you call save.
In your first example, that information is encoded into the type system. You're effectively letting the compiler do some correctness checking for you, since now you're enforcing at compile-time that for a single Foo, load and save will be called on the same DB (yes it's possible that db is a var, but that in itself is another issue).
Furthermore, it seems inevitable that you're just going to be passing around a DB every place you pass a Foo. Suppose you have a function that uses Foo. In the first example, your function would look like
def loadFoo(foo: Foo) {
  foo.load()
}
whereas in the second it would look like:
def loadFoo(foo: Foo, db: DB) {
  foo.load(db)
}
So all you've done is lengthened every function signature and opened up room for errors.
Lastly, I would argue that your points about unit testing and not needing to read a function's code are invalid. In the first example, it's true that you can't see all of load's dependencies just by looking at the function signature. But load is not an isolated function, it is a method that is part of a trait. A method is not identical to a plain old function and they exist in the context of their defining trait.
In other words, you should not be thinking about unit testing the functions, but rather unit testing the trait. They're a package deal and you should have no expectations that their behavior is independent of each other. If you do want that kind of independence, then Foo should be an object, which basically makes load and save static methods (although even then, objects can have internal state, but that is far less idiomatic).
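To make the unit-testing point concrete, here's a sketch of testing the trait as a whole (the shape of DB is assumed from the example above):

class StubDB extends DB {
  var saved = false
  def save(): Unit = { saved = true } // record the interaction
  def load(): Unit = ()
}

val stub = new StubDB
val foo = new Foo { def db: DB = stub } // wire the dependency once for the whole trait
foo.save()
assert(stub.saved)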
Plus, you can never really tell what a function is doing just by looking at its dependencies. After all I could write a function:
def save(db: DB) {
  throw new Exception("hello!!")
}

Writing macros which generate statements

With the intention of reducing boilerplate for the end user when deriving instances of a certain typeclass (let's take Showable for example), I aim to write a macro which will autogenerate the instance names. E.g.:
// calling this:
Showable.derive[Int]
Showable.derive[String]
// should expand to this:
// implicit val derivedShowableInstance0 = new Showable[Int] { ... }
// implicit val derivedShowableInstance1 = new Showable[String] { ... }
I tried to approach the problem the following way, but the compiler complained that the expression should return a < no type > instead of Unit:
object Showable {
  def derive[a] = macro Macros.derive[a]

  object Macros {
    private var instanceNameIndex = 0

    def derive[a: c.WeakTypeTag](c: Context) = {
      import c.universe._
      import Flag._

      val newInstanceDeclaration = ...

      c.Expr[Unit](
        ValDef(
          Modifiers(IMPLICIT),
          newTermName("derivedShowableInstance" + instanceNameIndex),
          TypeTree(),
          newInstanceDeclaration
        )
      )
    }
  }
}
I get that a val declaration is not exactly an expression, and hence Unit might not be appropriate, but then how do I make it right?
Is this at all possible? If not, why not, and will it ever be? Are there any workarounds?
Yes, that's right. Declarations/definitions aren't expressions, so they need to be wrapped into Unit-returning blocks to become ones. Typically Scala does this automatically, but in this particular case you need to do it yourself.
However if you wrap a definition in a block, then it becomes invisible from the outside. That's the limitation of the current macro system that strictly follows the metaphor of "macro application is very much similar to a typed function call". Function calls don't bring new members into scope, so neither do def macros - both for technical and philosophical reasons. As shown in the example #3 of my recent "What Are Macros Good For?" talk, by using structural types def macros can work around this limitation, but this doesn't look particularly related to your question, so I'll omit the details.
On the other hand, there are some ideas how to overcome this limitation with new macro flavors. Macro annotations show that it's technically feasible for the macro engine to hook into new member creation, but we'd like to get more experience with macro annotations before bringing them into trunk. Some details about this can be found in my "Philosophy of Scala Macros" presentation. This can be useful in your situation, but I still won't go into details, because I think there's a much better solution for this particular case (feel free to ask for elaboration in comments, though!).
What I'd like to recommend you is to use materialization as described in http://docs.scala-lang.org/overviews/macros/implicits.html. With materializing macros you can automatically generate type class instances for the user without having the user write any code at all. Would that fit your use case?
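To give a rough idea of the shape (just a sketch with the generation logic elided, using the 2.10-era Context API that matches the question):

import scala.language.experimental.macros
import scala.reflect.macros.Context

trait Showable[T] { def show(x: T): String }

object Showable {
  // The compiler calls this whenever an implicit Showable[T] is missing,
  // so the user never has to write derive[...] by hand.
  implicit def materialize[T]: Showable[T] = macro materializeImpl[T]

  def materializeImpl[T: c.WeakTypeTag](c: Context): c.Expr[Showable[T]] = {
    import c.universe._
    // ...inspect weakTypeOf[T] and build `new Showable[T] { ... }` here...
    ???
  }
}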

Why not mark val or var variables as private?

Coming from a Java background, I always mark instance variables as private. I'm learning Scala, and in almost all of the code I have viewed, the val/var members have the default (public) access. Why is this the default? Doesn't it break the information-hiding/encapsulation principle?
It would help if you specified which code, but keep in mind that some example code is in a simplified form to highlight whatever it is that the example is supposed to show you. Since the default access is public, the modifiers are often left off for simplicity.
That said, since a val is immutable, there's not much harm in leaving it public as long as you recognize that this is now part of the API for your class. That can be perfectly okay:
class DataThingy(data: Array[Double]) {
  val sum = data.sum
}
Or it can be an implementation detail that you shouldn't expose:
class Statistics(data: Array[Double]) {
  val sum = data.sum
  val sumOfSquares = data.map(x => x*x).sum
  val expectationSquared = (sum * sum)/(data.length*data.length)
  val expectationOfSquare = sumOfSquares/data.length
  val varianceOfSample = expectationOfSquare - expectationSquared
  val standardDeviation = math.sqrt(data.length*varianceOfSample/(data.length-1))
}
Here, we've littered our class with all of the intermediate steps for calculating standard deviation. And this is especially foolish given that this is not the most numerically stable way to calculate standard deviation with floating point numbers.
Rather than merely making all of these private, it is better style, if possible, to use local blocks or private[this] defs to perform the intermediate computations:
val sum = data.sum
val standardDeviation = {
  val sumOfSquares = ...
  ...
  math.sqrt(...)
}
or
val sum = data.sum
private[this] def findSdFromSquares(s: Double, ssq: Double) = { ... }
val standardDeviation = findSdFromSquares(sum, data.map(x => x*x).sum)
If you need to store a calculation for later use, then private val or private[this] val is the way to go, but if it's just an intermediate step on the computation, the options above are better.
Likewise, there's no harm in exposing a var if it is a part of the interface--a vector coordinate on a mutable vector for instance. But you should make them private (better yet: private[this], if you can!) when it's an implementation detail.
One important difference between Java and Scala here is that in Java you can not replace a public variable with getter and setter methods (or vice versa) without breaking source and binary compatibility. In Scala you can.
So in Java if you have a public variable, the fact that it's a variable will be exposed to the user and if you ever change it, the user has to change his code. In Scala you can replace a public var with a getter and setter method (or a public val with just a getter method) without the user ever knowing the difference. So in that sense no implementation details are exposed.
As an example, let's consider a rectangle class:
class Rectangle(val width: Int, val height: Int) {
  val area = width * height
}
Now what happens if we later decide that we don't want the area to be stored as a variable, but rather it should be calculated each time it's called?
In Java the situation would be like this: If we had used a getter method and a private variable, we could just remove the variable and change the getter method to calculate the area instead of using the variable. No changes to user code needed. But since we've used a public variable, we are now forced to break user code :-(
In Scala it's different: we can just change the val to def and that's it. No changes to user code needed.
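That is, the change is as small as this (same class as above, with area now computed per call):

class Rectangle(val width: Int, val height: Int) {
  def area = width * height // callers see no difference
}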
Actually, some Scala developers tend to use default access too much, but you can find appropriate examples in famous Scala projects (for example, Twitter's Finagle).
On the other hand, creating objects as immutable values is the standard way in Scala. We don't need to hide attributes that are completely immutable.
I'd like to answer the question with a bit more generic approach. I think the answer you are looking for has to do with the design paradigms on which Scala is built. Instead of the classical procedural / object-oriented approach you see in Java, functional programming is used to a much higher extent. I cannot cover all the code that you mention, of course, but in general (well-written) Scala code will not need a lot of mutability.
As pointed out by Rex, vals are immutable, so there are few reasons for them not to be public. But as I see it, immutability is not a goal in itself but a result of functional programming. So if we consider functions as something like x -> function -> y, the function part becomes somewhat of a black box; we don't really care what it does, as long as it does it correctly. As the Haskell Wiki writes:
Purely functional programs typically operate on immutable data. Instead of altering existing values, altered copies are created and the original is preserved.
This also explains the reduced need for information hiding: the parts we traditionally wanted to hide away are executed inside functions, and are thus hidden anyway.
So, to cut things short, I would argue that mutability and information hiding have become largely redundant in Scala. And why clutter things up with getters and setters when it can be avoided?

Which GOF design pattern(s) have an entirely different implementation (Java vs Scala)?

Recently I read the following SO question:

"Are there any use cases for employing the Visitor Pattern in Scala? Should I use pattern matching in Scala every time I would have used the Visitor Pattern in Java?"

The link points to the question titled Visitor Pattern in Scala. The accepted answer begins with:

"Yes, you should probably start off with pattern matching instead of the visitor pattern. See http://www.artima.com/scalazine/articles/pattern_matching.html"

My question (inspired by the above) is: which GOF design pattern(s) have an entirely different implementation in Scala? Where should I be careful not to follow the Java-based programming model of design patterns (Gang of Four) when programming in Scala?
Creational patterns
Abstract Factory
Builder
Factory Method
Prototype
Singleton : Directly create an Object (scala)
Structural patterns
Adapter
Bridge
Composite
Decorator
Facade
Flyweight
Proxy
Behavioral patterns
Chain of responsibility
Command
Interpreter
Iterator
Mediator
Memento
Observer
State
Strategy
Template method
Visitor : Pattern Matching (scala)
For almost all of these, there are Scala alternatives that cover some but not all of the use cases for these patterns. All of this is IMO, of course, but:
Creational Patterns
Builder
Scala can do this more elegantly with generic types than can Java, but the general idea is the same. In Scala, the pattern is most simply implemented as follows:
trait Status
trait Done extends Status
trait Need extends Status

case class Built(a: Int, b: String) {}

class Builder[A <: Status, B <: Status] private () {
  private var built = Built(0,"")
  def setA(a0: Int) = { built = built.copy(a = a0); this.asInstanceOf[Builder[Done,B]] }
  def setB(b0: String) = { built = built.copy(b = b0); this.asInstanceOf[Builder[A,Done]] }
  def result(implicit ev: Builder[A,B] <:< Builder[Done,Done]) = built
}

object Builder {
  def apply() = new Builder[Need, Need]
}
(If you try this in the REPL, make sure that the class and object Builder are defined in the same block, i.e. use :paste.) The combination of checking types with <:<, generic type arguments, and the copy method of case classes make a very powerful combination.
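For instance, here is how the builder behaves at the call site (a quick sketch):

val ok = Builder().setA(1).setB("two").result // compiles: both type parameters are Done
// val bad = Builder().setA(1).result // does not compile: B is still Need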
Factory Method (and Abstract Factory Method)
Factory methods' main use is to keep your types straight; otherwise you may as well use constructors. With Scala's powerful type system, you don't need help keeping your types straight, so you may as well use the constructor or an apply method in the companion object to your class and create things that way. In the companion-object case in particular, it is no harder to keep that interface consistent than it is to keep the interface in the factory object consistent. Thus, most of the motivation for factory objects is gone.
Similarly, many cases of abstract factory methods can be replaced by having a companion object inherit from an appropriate trait.
Prototype
Of course overridden methods and the like have their place in Scala. The examples used for the Prototype pattern on the Design Patterns web site, however, are rather inadvisable in Scala (or Java, IMO). That said, if you wish to have a superclass select actions based on its subclasses rather than letting them decide for themselves, you should use match rather than clunky instanceof tests.
Singleton
Scala embraces these with object. They are singletons--use and enjoy!
Structural Patterns
Adapter
Scala's trait provides much more power here--rather than creating a class that implements an interface, for example, you can create a trait which implements only part of the interface, leaving the rest for you to define. For example, java.awt.event.MouseMotionListener requires you to fill in two methods:
def mouseDragged(me: java.awt.event.MouseEvent)
def mouseMoved(me: java.awt.event.MouseEvent)
Maybe you want to ignore dragging. Then you write a trait:
trait MouseMoveListener extends java.awt.event.MouseMotionListener {
  def mouseDragged(me: java.awt.event.MouseEvent) {}
}
Now you can implement only mouseMoved when you inherit from this. So: similar pattern, but much more power with Scala.
Bridge
You can write bridges in Scala. It's a huge amount of boilerplate, though not quite as bad as in Java. I wouldn't recommend routinely using this as a method of abstraction; think about your interfaces carefully first. Keep in mind that with the increased power of traits that you can often use those to simplify a more elaborate interface in a place where otherwise you might be tempted to write a bridge.
In some cases, you may wish to write an interface transformer instead of the Java bridge pattern. For example, perhaps you want to treat drags and moves of the mouse using the same interface with only a boolean flag distinguishing them. Then you can
trait MouseMotioner extends java.awt.event.MouseMotionListener {
  def mouseMotion(me: java.awt.event.MouseEvent, drag: Boolean): Unit
  def mouseMoved(me: java.awt.event.MouseEvent) { mouseMotion(me, false) }
  def mouseDragged(me: java.awt.event.MouseEvent) { mouseMotion(me, true) }
}
This lets you skip the majority of the bridge pattern boilerplate while accomplishing a high degree of implementation independence and still letting your classes obey the original interface (so you don't have to keep wrapping and unwrapping them).
Composite
The composite pattern is particularly easy to achieve with case classes, though making updates is rather arduous. It is equally valuable in Scala and Java.
Decorator
Decorators are awkward. You usually don't want to use the same methods on a different class in the case where inheritance isn't exactly what you want; what you really want is a different method on the same class which does what you want instead of the default thing. The enrich-my-library pattern is often a superior substitute.
Facade
Facade works better in Scala than in Java because you can have traits carry partial implementations around so you don't have to do all the work yourself when you combine them.
Flyweight
Although the flyweight idea is as valid in Scala as Java, you have a couple more tools at your disposal to implement it: lazy val, where a variable is not created unless it's actually needed (and thereafter is reused), and by-name parameters, where you only do the work required to create a function argument if the function actually uses that value. That said, in some cases the Java pattern stands unchanged.
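A tiny sketch of both tools (the names are invented for illustration):

class Expensive(seed: Int) {
  // built on first access, then shared by all subsequent reads
  lazy val table: Vector[Double] = Vector.tabulate(1000000)(i => math.sin(i * seed))
}

def logIf(enabled: Boolean)(msg: => String): Unit =
  if (enabled) println(msg) // by-name: msg is never even built when disabled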
Proxy
Works the same way in Scala as Java.
Behavioral Patterns
Chain of responsibility
In those cases where you can list the responsible parties in order, you can
xs.find(_.handleMessage(m))
assuming that everyone has a handleMessage method that returns true if the message was handled. If you want to mutate the message as it goes, use a fold instead.
Since it's easy to drop responsible parties into a Buffer of some sort, the elaborate framework used in Java solutions rarely has a place in Scala.
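A sketch of the fold variant, threading the message through each responsible party in turn (the Handler shape is assumed):

case class Msg(text: String)
trait Handler { def transform(m: Msg): Msg }

def process(handlers: List[Handler], m: Msg): Msg =
  handlers.foldLeft(m)((msg, h) => h.transform(msg))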
Command
This pattern is almost entirely superseded by functions. For example, instead of all of
public interface ChangeListener extends EventListener {
  void stateChanged(ChangeEvent e);
}
...
void addChangeListener(ChangeListener listener) { ... }
you simply
def onChange(f: ChangeEvent => Unit)
Interpreter
Scala provides parser combinators which are dramatically more powerful than the simple interpreter suggested as a Design Pattern.
Iterator
Scala has Iterator built into its standard library. It is almost trivial to make your own class extend Iterator or Iterable; the latter is usually better since it makes reuse trivial. Definitely a good idea, but so straightforward I'd hardly call it a pattern.
Mediator
This works fine in Scala, but is generally useful for mutable data, and even mediators can fall afoul of race conditions and such if not used carefully. Instead, try when possible to have your related data all stored in one immutable collection, case class, or whatever, and when making an update that requires coordinated changes, change all things at the same time. This won't help you interface with javax.swing, but is otherwise widely applicable:
case class Entry(s: String, d: Double, notes: Option[String]) {}

def parse(s0: String, old: Entry) = {
  try { old.copy(s = s0, d = s0.toDouble) }
  catch { case e: Exception => old }
}
Save the mediator pattern for when you need to handle multiple different relationships (one mediator for each), or when you have mutable data.
Memento
lazy val is nearly ideal for many of the simplest applications of the memento pattern, e.g.
class OneRandom {
  lazy val value = scala.util.Random.nextInt
}
val r = new OneRandom
r.value // Evaluated here
r.value // Same value returned again
You may wish to create a small class specifically for lazy evaluation:
class Lazily[A](a: => A) {
  lazy val value = a
}
val r = new Lazily(scala.util.Random.nextInt)
// not actually called until/unless we ask for r.value
Observer
This is a fragile pattern at best. Favor, whenever possible, either keeping immutable state (see Mediator), or using actors where one actor sends messages to all others regarding the state change, but where each actor can cope with being out of date.
State
This is equally useful in Scala, and is actually the favored way to create enumerations when applied to methodless traits:
sealed trait DayOfWeek
final trait Sunday extends DayOfWeek
...
final trait Saturday extends DayOfWeek
(often you'd want the weekdays to do something to justify this amount of boilerplate).
Strategy
This is almost entirely replaced by having methods take functions that implement a strategy, and providing functions to choose from.
def printElapsedTime(t: Long, rounding: Double => Long = math.round) {
  println(rounding(t*0.001))
}
printElapsedTime(1700, x => math.floor(x).toLong) // Change strategy
Template Method
Traits offer so many more possibilities here that it's best to just consider them another pattern. You can fill in as much code as you can from as much information as you have at your level of abstraction. I wouldn't really want to call it the same thing.
Visitor
Between structural typing and implicit conversion, Scala has astoundingly more capability than Java's typical visitor pattern. There's no point using the original pattern; you'll just get distracted from the right way to do it. Many of the examples are really just wishing there was a function defined on the thing being visited, which Scala can do for you trivially (i.e. convert an arbitrary method to a function).
Ok, let's have a brief look at these patterns. I'm looking at all these patterns purely from a functional programming point of view, and leaving out many things that Scala can improve from an OO point of view. Rex Kerr's answer provides an interesting counterpoint to my own answers (I only read his answer after writing my own).
With that in mind, I'd like to say that it is important to study persistent data structures (functionally pure data structures) and monads. If you want to go deep, I think category theory basics are important -- category theory can formally describe all program structures, including imperative ones.
Creational Patterns
A constructor is nothing more than a function. A parameterless constructor for type T is nothing more than a function () => T, for example. In fact, Scala's syntactical sugar for functions is taken advantage on case classes:
case class T(x: Int)
That is equivalent to:
class T(val x: Int) { /* bunch of methods */ }
object T {
  def apply(x: Int) = new T(x)
  /* other stuff */
}
So that you can instantiate T with T(n) instead of new T(n). You could even write it like this:
object T extends (Int => T) {
  def apply(x: Int) = new T(x)
  /* other stuff */
}
Which turns T into a formal function, without changing any code.
This is the important point to keep in mind when thinking of creational patterns. So let's look at them:
Abstract Factory
This one is unlikely to change much. A class can be thought of as a group of closely related functions, so a group of closely related functions is easily implemented through a class, which is what this pattern does for constructors.
Builder
Builder patterns can be replaced by curried functions or partial function applications.
// curried version: apply one argument at a time
def makeCar: Size => Engine => Luxuries => Car = ???
def makeLargeCars = makeCar(Size.Large)

// tupled version: fix some arguments with placeholders
def makeCar: (Size, Engine, Luxuries) => Car = ???
def makeLargeCars = makeCar(Size.Large, _: Engine, _: Luxuries)
Factory Method
Becomes obsolete if you discard subclassing.
Prototype
Doesn't change -- in fact, this is a common way of creating data in functional data structures. See case classes copy method, or all non-mutable methods on collections which return collections.
Singleton
Singletons are not particularly useful when your data is immutable, but Scala's object implements this pattern in a safe manner.
Structural Patterns
This is mostly related to data structures, and the important point on functional programming is that the data structures are usually immutable. You'd be better off looking at persistent data structures, monads and related concepts than trying to translate these patterns.
That is not to say that the patterns here are irrelevant. I'm just saying that, as a general rule, you should look into the things above instead of trying to translate structural patterns into functional equivalents.
Adapter
This pattern is related to classes (nominal typing), so it remains important as long as you have that, and is irrelevant when you don't.
Bridge
Related to OO architecture, so the same as above.
Composite
Look at Lenses and Zippers.
Decorator
A Decorator is just function composition. If you are decorating a whole class, that may not apply. But if you provide your functionality as functions, then composing a function while maintaining its type is a decorator.
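A sketch of that idea: wrap a function with extra behavior while preserving its type, so the decorated version can be used anywhere the original could:

val parse: String => Int = _.trim.toInt
val logged: String => Int = s => { println(s"parsing '$s'"); parse(s) } // decorated, same type
val doubled: String => Int = parse.andThen(_ * 2) // post-processing via composition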
Facade
Same comment as for Bridge.
Flyweight
If you think of constructors as functions, think of Flyweight as function memoization. Also, Flyweight is intrinsically related to how persistent data structures are built, and benefits a lot from immutability.
Proxy
Same comment as for Adapter.
Behavioral Patterns
This is all over the place. Some of them are completely useless, while others are as relevant as always in a functional setting.
Chain of Responsibility
Like Decorator, this is function composition.
Command
This is a function. The undo part is not necessary if your data is immutable. Otherwise, just keep a pair of function and its reverse. See also Lenses.
Interpreter
This is a monad.
Iterator
It can be rendered obsolete by just passing a function to the collection. That's what Traversable does with foreach, in fact. Also, see Iteratee.
Mediator
Still relevant.
Memento
Useless with immutable objects. Also, its point is keeping encapsulation, which is not a major concern in FP.
Note that this pattern is not serialization, which is still relevant.
Observer
Relevant, but see Functional Reactive Programming.
State
This is a monad.
Strategy
A strategy is a function.
Template Method
This is an OO design pattern, so it's relevant for OO designs.
Visitor
A visitor is just a method receiving a function. In fact, that's what Traversable's foreach does.
In Scala, it can also be replaced with extractors.
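A sketch of the pattern-matching replacement, with the extractors coming for free from case classes:

sealed trait Expr
case class Num(n: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr

// what a Visitor would spread across accept/visit methods is a single match here
def eval(e: Expr): Int = e match {
  case Num(n)    => n
  case Add(l, r) => eval(l) + eval(r)
}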
I suppose the Command pattern is not needed in functional languages at all. Instead of encapsulating a command function inside an object and then selecting the appropriate object, just use the appropriate function itself.
Flyweight is just a cache, and has a default implementation in most functional languages (memoize in Clojure).
Even Template Method, Strategy and State can be implemented by just passing an appropriate function into a method.
So I recommend not going deep into design patterns when you try out a functional style, but instead reading some books about functional concepts (higher-order functions, laziness, currying, and so on).