Is it possible to have mutually recursive types in Scala?
I have an XML file with a list of bug-tracker issues. It's raw data. The model has different issue types like "tasks", "subtasks", "bug", "special-bug".
Now I want to parse my raw-data to a hierarchical structure of tasks and subtasks:
// data type for field contents
abstract class Field
case class Id(raw: String) extends Field
case class Status(raw: String) extends Field
...
// data type for primary model
abstract class Issue(id: Id, ...)
case class Task(id: Id, status: Status, ..., subtasks: List[Subtask]) extends Issue(id, ...)
case class Subtask(id: Id, status: Status, ..., parent: Task) extends Issue(id, ...)
I wonder if this mutual recursion is theoretically possible?
Second question:
I render the model to some wiki markup. This works fine with an overloaded recursive render(): String in the class for the datatype. (Probably I should have a "Renderable" superclass!?)
What would be the cleanest way for parsing, i.e. I'd like to have a recursive
fromXML : scala.xml.Elem => Issue / Field
Where would I put it? What would it look like? IIUC, the companion is auto-generated for case classes, so I can't add to it?
I have this e.g.:
def fromXml(e: Elem) = e match {
  case <a>test</a> => Id("test")
  case _ => Status("Pre-analysed")
}
But I failed to give the function a type. What is the type of that function?
I also thought about passing the XML Elem directly to the constructor of the ADT; would that be clever? Or should I separate XML parsing from model creation?
Jesus, after learning the Scala basics and doing some scripts and functions (and thinking too much in Java), I finally understood how to write ADTs and can express myself nearly as well as in the good old Haskell times :-)
A: Regarding the initialisation of immutable, mutually dependent classes, have a look at this question.
B: Regarding your question render: Foo => String function vs. Renderable superclass, this is IMHO more or less a design decision between a functional and an object-oriented approach. I personally don't think that one of them is superior to the other, it is just a matter of taste. The paper "Independently Extensible Solutions to the Expression Problem" has a nice comparison between the two, although in a slightly more elaborate context (it is a great read, however).
C: Companion objects are created by the compiler, but if you also specify a companion object, the compiler will merge the two.
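For example, a hand-written companion with a fromXml method sits alongside the generated apply (a minimal sketch, reusing Field and Id from the question):
import scala.xml.Elem

case class Id(raw: String) extends Field

object Id {
  // merged with the synthetic companion, so Id("...") still works
  def fromXml(e: Elem): Id = Id(e.text)
}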
D: In your current class hierarchy there is no non-trivial common supertype for Issue and Field, which makes it difficult to give a meaningful return type for fromXML. You could work with an Either[Issue, Field], though, but this looks fishy to me. In general, I would avoid mixing functions that are supposed to return full-fledged nodes (e.g., Task) with those that return "internal" nodes (e.g., Status).
E: Did you have a look at existing solutions, e.g. scalaxb? You can find more links here and here.
Related
I have a fairly involved ADT representing a small query language (MongoDB, to be specific). A simplified version looks a bit like this:
sealed abstract class Query extends Product with Serializable
final case class Eq[A](field: String, value: A) extends Query
final case class And(queries: Seq[Query]) extends Query
case object None extends Query
I've declared Query without a type parameter since not all values actually have one - None, for example, is parameterless.
I also have a type class, DocumentEncoder[A], that lets me turn any A into a BsonDocument.
The problem I'm running into is that Query needs a DocumentEncoder. Declaring one for each alternative is fairly trivial:
Eq[A] writes itself, provided A: DocumentEncoder.
And is very similar, if we assume that Query does have a DocumentEncoder instance.
None simply encodes as the empty BSON document
What I'm struggling with is writing a global DocumentEncoder[Query]. What I'd usually do is pattern match on each alternative, but in this case I'm stuck with Eq[A]: I'd need to express something like case Eq[A: DocumentEncoder](field, value) => ..., but this is, as far as I know, not possible - pattern matching happens at runtime, implicit resolution at compile time.
The solution I have, which I find very unsatisfactory, is storing a BsonEncoder[A] as a field of Eq[A]. This allows me to write something like:
implicit val queryEncoder: DocumentEncoder[Query] = DocumentEncoder.from {
  case e @ Eq(field, value) => [...] e.encoder.encode(value) [...]
  [...]
}
I can't help but find this horrible, but can't find a more elegant solution. The only other thing I can think of is that my premise (Query should not have a type parameter) is flawed, but:
having a type parameter, how would I go about writing And's type declaration?
is it ok to declare None as a Query[Unit]?
maybe in my case I could get away with always having a type parameter, but what about a theoretically more generic case where it's not possible?
Alright, ok, so I can think of another solution, but it feels rather like overkill: having Query's type be a type member rather than a type parameter, and declaring a Query.Aux type alias that lifts the type member to a parameter (for implicit resolution). This sort of feels like a "big boy"'s solution, though - I've seen it used in libraries like shapeless, and I somehow feel like my code or problems aren't yet at a level that requires this kind of expert concept.
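To make that concrete, a rough sketch of the type-member variant I have in mind (purely illustrative; whether it's worth the machinery is exactly my question):
sealed trait Query {
  type Value
}

object Query {
  type Aux[A] = Query { type Value = A }
}

final case class Eq[A](field: String, value: A) extends Query {
  type Value = A
}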
Following on from this excellent set of answers on how to define union types in Scala. I've been using the Miles Sabin definition of union types, but one question remains.
How do you work with these if the type isn't known until runtime? For example:
trait inv[-A] {}
type Or[A, B] = {
  type check[X] = (inv[A] with inv[B]) <:< inv[X]
}
case class Foo[A : (Int Or String)#check](a: A)
Foo(1) // Foo[Int] = Foo(1)
Foo("hi") // Foo[String] = Foo(hi)
Foo(2.0) // Error!
This example works since the parameter A is known at compile time, and calling Foo(1) is really calling Foo[Int](1). However, what do you do if parameter A isn't known until runtime? Maybe you're parsing a file that contains the data for Foos, in which case the type parameter of Foo isn't known until you read the data. There's no easy way to set parameter A in this case.
The best solutions I've been able to come up with are:
Pattern match on the data you've read and then create different Foos based on that type. In my case this isn't feasible because my case class actually contains dozens of union types, so there'd be hundreds of combinations of types to pattern match.
Cast the type you've just read to be (String or Int), so you have a single type to pass around, that passes the Type Class constraint when you create Foo with it. Then return Foo[_] instead. This puts the onus back on the Foo user to work out the type of each field (since they'll appear to be type Any), but at least it defers having to know the type until the field is actually used, in which case a pattern match seems more tractable.
The second solution looks like this:
def parseLine: Any  // Parses a data point, but it can be either a String
                    // or an Int, so it returns Any.

def mkFoo: Foo[_] = {
  val a = parseLine.asInstanceOf[Int with String]
  Foo(a) // Passes the type constraint now
}
In practice I've ended up using the second solution, but I'm wondering if there's something better I can do?
Another way to state the problem is: What does it mean to return a Union Type? Functions can only return a single type, and the trickery we use with Miles Sabin union types is only useful for the types you pass in, not for the types you return.
PS. For context, why this is a problem in my case is that I'm generating a set of case classes from a JSON schema file. JSON naturally supports union types, so I would like my case classes to reflect that too. This works great in one direction: users creating case classes to be serialized out to JSON. But it gets sticky in the other direction: users parsing JSON files and getting a set of populated case classes returned to them.
The "standard" Scala solution to this problem is to use an ordinary discriminated-union type (ie, to forego true union types altogether):
sealed trait Foo
case class IntFoo(x: Int) extends Foo
case class StringFoo(x: String) extends Foo
This reflects the fact that, as you observe, the particular type of the member is a runtime value; the JVM type-tag of the Foo instance provides this runtime value.
Miles Sabin's implementation of union types is very clever, but I'm not sure if it provides any practical benefit, because it only restricts the type of thing that can go into a Foo, but provides the user of a Foo with no computable version of that restriction, in the way a match provides you with a computable version of the sealed trait. In general, for a restriction to be useful, it needs two sides: a check that only the right things are put in, and an extractor (aka an eliminator) that allows the same right things to come out the other end.
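For illustration, the "eliminator" side of the sealed trait above is just an exhaustively checked match (describe is a made-up name):
def describe(foo: Foo): String = foo match {
  case IntFoo(x)    => s"an Int: $x"
  case StringFoo(s) => s"a String: $s"
}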
Perhaps if you gave some explanation of why you're looking for a purer union type it would illuminate whether regular discriminated unions are sufficient or if you really need something more.
There's a reason every JSON parser for Scala requires well defined types into which the JSON will be converted, even if some fields have to be dropped: you cannot work with something you don't know the type of.
To give an example, say you have a, and maybe a is a String, maybe it's an Int, but you don't know what it is. What computation could you possibly make with a, not knowing its type? How could your code compute the sum of all a's, for instance, if it didn't know in advance that they were numbers?
Generally, the answer to that is that you want to perform user-driven data manipulation at runtime over data with unknown characteristics: the user sees that the field is a number and decides they want to know the sum of that field. Fine, but if so, you are going about it the wrong way.
There is a well-defined way to represent JSON data in Scala (and, for that matter, any data that has the same characteristics as JSON): a hierarchy of classes. A JSON value may be a JSON object, an array, or one of a number of primitives. A JSON object contains a list of key/value pairs, whose keys are JSON strings and whose values are JSON values. And so on. This is easy to represent, and there are many libraries doing so already. In fact, there are so many that there's a project called Json4s which presents a unified API which can be used with, and is implemented by, many of the aforementioned libraries.
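A minimal sketch of such a hierarchy might look like this (the names are illustrative, not those of Json4s or any particular library):
sealed trait JsonValue
case object JsonNull extends JsonValue
final case class JsonBoolean(value: Boolean) extends JsonValue
final case class JsonNumber(value: BigDecimal) extends JsonValue
final case class JsonString(value: String) extends JsonValue
final case class JsonArray(values: List[JsonValue]) extends JsonValue
final case class JsonObject(fields: List[(String, JsonValue)]) extends JsonValue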
Things like the records which Miles Sabin's Shapeless library provide are intended to be used when the input doesn't have a well defined schema, but the program knows what it needs from that input. And, yes, the program might know what to do with a if it is an Int or a String, but not every possible value.
The next Scala 3 (mid-2020), based on Dotty, will implement the Union Types proposal from Sept. 2018.
You can see it in "A Tour of Scala 3" (June 2019):
Union Types Provide ad-hoc combinations of types
Subsetting = Subtyping
No boxing overhead
case class UserName(name: String)
case class Password(hash: Hash)

def help(id: UserName | Password) = {
  val user = id match {
    case UserName(name) => lookupName(name)
    case Password(hash) => lookupPassword(hash)
  }
  ...
}
Union Types Work also with singleton types
Great for JS interop
type Command = "Click" | "Drag" | "KeyPressed"
def handleEvent(kind: Command) = kind match {
  case "Click"      => MouseClick()
  case "Drag"       => MoveTo()
  case "KeyPressed" => KeyPressed()
}
I've read some explanations of Algebraic Data Types:
The Algebra of Algebraic Data types I
The Algebra of Algebraic Data types II
The Algebra of Algebraic Data types III
The Algebra of Data, and Calculus of Mutation
These articles give very detailed descriptions and code samples.
At first I was thinking Algebraic Data Types were just for defining some types easily so that we can match them with pattern matching. But after reading these articles, I found that "pattern matching" is not even mentioned there, and the content looks interesting but far more complex than I expected.
So I have some questions (which are not answered in these articles):
Why do we need it, say, in Haskell or Scala?
What we can do if we have it, and what we can't do if we don't have it?
We should start with the Haskell wiki article Algebraic Data Types
And here, shortly, just my vision:
we need them to model business logic in the old object-oriented way (or actually in the old categories-based way), and to make it more typesafe, as the compiler can check that you've matched all possible choices. Or, in other words, that your function is total, not partial. Eventually, it gives the compiler the ability to prove the correctness of your code (that's why sealed traits are recommended). So, the more different types you have, the better - btw, generic programming helps you here as it produces more types.
standard features: we can represent a type as a "set" of objects, we can match an object against a type (using pattern matching), or even deconstruct (analyze) it with extractors. We can also add behavior to such a type at compile time with type classes (a short sketch follows this list). That's possible for regular classes as well, but here it gives us the ability to separate the algebraic model (types) from the behavior (functions).
we can construct types as products/coproducts of other objects/types. You can think of an algebraic type system as a set (or, more generally, as a cartesian-closed category). type Boolean = True | False means that Boolean is a union (coproduct) of True and False. Cat(height: Height, weight: Weight) is a "tuple" (more generally, a product) of Height and Weight. A product (more or less) represents the "part of" relationship from OOP, a coproduct the "is a" relationship (but in the opposite direction).
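Here is the type-class sketch promised above, kept deliberately tiny (Show, User and describe are made-up names, not from any library):
trait Show[A] { def show(a: A): String }

case class User(name: String)

implicit val showUser: Show[User] = new Show[User] {
  def show(u: User): String = s"User(${u.name})"
}

def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

// describe(User("Ann"))  // "User(Ann)"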
It also gives us a way to dispatch behaviour at runtime in multimethod style (as in CLOS):
sealed trait Animal
case class Cat(...) extends Animal
case class Dog(...) extends Animal

def move(c: Animal) = c match {
  case Cat(...) => ...
  case Dog(...) => ...
  case a: Animal => ... // if you need a default implementation
}
Haskell-like:
data Animal = Dog | Cat  -- coproduct
move Dog = ...
move Cat = ...
Instead of:
trait Animal {
  ...
  def move = ...
}

class Cat(val ...) extends Animal {
  override def move = ...
}

class Dog(val ...) extends Animal {
  override def move = ...
}
P.S. Theoretically, if you're modeling the world in an algebraic way and your functions are total and pure - you can prove that your application is correct. If it compiles - it works :).
P.S.2. I should also mention that Scala's type hierarchy, which has Any at the top, isn't so good for typesafety (but is good for interop with Java and IO), as it breaks the nice structure defined by GADTs. More than that, a case class might be both a GADT (algebraic) and an ADT (abstract data type), which also reduces the guarantees.
The blog post you mention is more about the mathematical aspects of algebraic data types than about their practical use in programming. I think most functional programmers first learnt about algebraic data types by using them in some programming language, and only later studied their algebraic properties.
Indeed, the intent of the blog post is clear from its very beginning:
In this series of posts I’ll explain why Haskell’s data types are called algebraic - without mentioning category theory or advanced math.
Anyway, the practical usefulness of algebraic types is best appreciated by playing around with them.
Suppose, for instance, you want to write a function to intersect two segments on a plane.
def intersect(s1: Segment, s2: Segment): ???
What should the result be? It's tempting to write
def intersect(s1: Segment, s2: Segment): Point
but what if there's no intersection? One might attempt to patch that corner case by returning null, or by throwing a NoIntersection exception. However, two segments might also overlap in more than one point, when they lie on the same straight line. What should we do in such cases? Throw another exception?
The algebraic types approach is to design a type covering all the cases:
sealed trait Intersection
case object NoIntersection extends Intersection
case class SinglePoint(p: Point) extends Intersection
case class SegmentPortion(s: Segment) extends Intersection
def intersect(s1: Segment, s2: Segment): Intersection
There are many practical cases where such an approach feels quite natural. In some other languages lacking algebraic types, one has to resort to exceptions, to nulls (also see the billion-dollar mistake), to non-sealed classes (making it impossible for the compiler to check for exhaustiveness of pattern matching), or to other features provided by the language. Arguably, the "best" option in OOP is to use the Visitor pattern to encode algebraic types and pattern matching in languages which have no such features. Still, having that directly supported in the language, as in Scala, is much more convenient.
As far as I understand value classes in Scala are just there to wrap primitive types like Int or Boolean into another type without introducing additional memory usage. So they are basically used as a lightweight alternative to ordinary classes.
That reminds me of Haskell's newtype notation which is also used to wrap existing types in new ones, thus introducing a new interface to some data without consuming additional space (to see the similarity of both languages consider for instance the restriction to one "constructor" with one field both in Haskell and in Scala).
What I am wondering is why the concept of introducing new types that get inlined by the compiler is not generalized to Haskell's approach of having zero-overhead type wrappers for any kind of type. Why did the Scala guys stick to primitive types (aka AnyVal) here?
Or is there already a way in Scala to also define such wrappers for scala.AnyRef types?
They're not limited to AnyVal.
implicit class RichOptionPair[A, B](val o: Option[(A, B)]) extends AnyVal {
  def ofold[C](f: (A, B) => C) = o map { case (a, b) => f(a, b) }
}

scala> Some("fish", 5).ofold(_ * _)
res0: Option[String] = Some(fishfishfishfishfish)
There are various limitations on value classes that make them act like lightweight wrappers, but only being able to wrap primitives is not one of them.
The reasoning is documented in SIP-15 (Scala Improvement Process). As Alexey Romanov pointed out in his comment, the idea was to look for an expression using existing keywords that would allow the compiler to determine this situation.
In order for the compiler to perform the inlining, several constraints apply, such as the wrapping class being "ephemeral" (no field or object members, constructor body etc.). Your suggestion of automatically generating inlining classes has at least two problems:
The compiler would need to go through the whole list of constraints for each class. And because the status as a value class would be implicit, it might flip when members are added to the class at a later point, breaking binary compatibility.
More constraints are added by the compiler, e.g. the value class becomes final, prohibiting inheritance. So you would have to add these constraints to any class that wants to be inlineable that way, and then you gain nothing but extra verbosity.
One could think of other hypothetical constructs, e.g. val class Meter(underlying: Double) { ... }, but the advantage of extends AnyVal IMO is that no syntactic extensions are needed. Also, all primitive types extend AnyVal, so there is a nice analogy (no reference, no inheritance, efficient representation, etc.).
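For comparison, the same idea expressed with the existing keyword, as a minimal sketch:
class Meter(val underlying: Double) extends AnyVal {
  def +(that: Meter): Meter = new Meter(underlying + that.underlying)
}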
Recently I read following SO question :
Is there any use cases for employing the Visitor Pattern in Scala?
Should I use Pattern Matching in Scala every time I would have used the Visitor Pattern in Java?
The linked question is titled "Visitor Pattern in Scala". The accepted answer begins with:
Yes, you should probably start off with pattern matching instead of the visitor pattern. See this: http://www.artima.com/scalazine/articles/pattern_matching.html
My question (inspired by the above-mentioned question) is: which GoF design pattern(s) have an entirely different implementation in Scala? Where should I be careful and not follow the Java-based programming model of Design Patterns (Gang of Four) if I am programming in Scala?
Creational patterns
Abstract Factory
Builder
Factory Method
Prototype
Singleton : Directly create an Object (scala)
Structural patterns
Adapter
Bridge
Composite
Decorator
Facade
Flyweight
Proxy
Behavioral patterns
Chain of responsibility
Command
Interpreter
Iterator
Mediator
Memento
Observer
State
Strategy
Template method
Visitor : Pattern Matching (scala)
For almost all of these, there are Scala alternatives that cover some but not all of the use cases for these patterns. All of this is IMO, of course, but:
Creational Patterns
Builder
Scala can do this more elegantly with generic types than can Java, but the general idea is the same. In Scala, the pattern is most simply implemented as follows:
trait Status
trait Done extends Status
trait Need extends Status
case class Built(a: Int, b: String) {}
class Builder[A <: Status, B <: Status] private () {
  private var built = Built(0, "")
  def setA(a0: Int) = { built = built.copy(a = a0); this.asInstanceOf[Builder[Done, B]] }
  def setB(b0: String) = { built = built.copy(b = b0); this.asInstanceOf[Builder[A, Done]] }
  def result(implicit ev: Builder[A, B] <:< Builder[Done, Done]) = built
}

object Builder {
  def apply() = new Builder[Need, Need]
}
(If you try this in the REPL, make sure that the class and object Builder are defined in the same block, i.e. use :paste.) Checking types with <:<, generic type arguments, and the copy method of case classes make for a very powerful combination.
Factory Method (and Abstract Factory Method)
Factory methods' main use is to keep your types straight; otherwise you may as well use constructors. With Scala's powerful type system, you don't need help keeping your types straight, so you may as well use the constructor or an apply method in the companion object to your class and create things that way. In the companion-object case in particular, it is no harder to keep that interface consistent than it is to keep the interface in the factory object consistent. Thus, most of the motivation for factory objects is gone.
Similarly, many cases of abstract factory methods can be replaced by having a companion object inherit from an appropriate trait.
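A minimal sketch of both ideas, with made-up names (the companion's apply plays the factory-method role, and the companion extending a trait stands in for an abstract factory):
trait ConnectionFactory {
  def open(url: String): Connection
}

class Connection private (val url: String)

object Connection extends ConnectionFactory {
  def open(url: String): Connection = new Connection(url)
  def apply(url: String): Connection = open(url)
}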
Prototype
Of course overridden methods and the like have their place in Scala. However, the examples used for the Prototype pattern on the Design Patterns web site are rather inadvisable in Scala (or Java, IMO). If you wish to have a superclass select actions based on its subclasses rather than letting them decide for themselves, you should use match rather than clunky instanceof tests.
Singleton
Scala embraces these with object. They are singletons--use and enjoy!
Structural Patterns
Adapter
Scala's trait provides much more power here--rather than creating a class that implements an interface, for example, you can create a trait which implements only part of the interface, leaving the rest for you to define. For example, java.awt.event.MouseMotionListener requires you to fill in two methods:
def mouseDragged(me: java.awt.event.MouseEvent)
def mouseMoved(me: java.awt.event.MouseEvent)
Maybe you want to ignore dragging. Then you write a trait:
trait MouseMoveListener extends java.awt.event.MouseMotionListener {
  def mouseDragged(me: java.awt.event.MouseEvent) {}
}
Now you can implement only mouseMoved when you inherit from this. So: similar pattern, but much more power with Scala.
Bridge
You can write bridges in Scala. It's a huge amount of boilerplate, though not quite as bad as in Java. I wouldn't recommend routinely using this as a method of abstraction; think about your interfaces carefully first. Keep in mind that with the increased power of traits that you can often use those to simplify a more elaborate interface in a place where otherwise you might be tempted to write a bridge.
In some cases, you may wish to write an interface transformer instead of the Java bridge pattern. For example, perhaps you want to treat drags and moves of the mouse using the same interface with only a boolean flag distinguishing them. Then you can
trait MouseMotioner extends java.awt.event.MouseMotionListener {
  def mouseMotion(me: java.awt.event.MouseEvent, drag: Boolean): Unit
  def mouseMoved(me: java.awt.event.MouseEvent) { mouseMotion(me, false) }
  def mouseDragged(me: java.awt.event.MouseEvent) { mouseMotion(me, true) }
}
This lets you skip the majority of the bridge pattern boilerplate while accomplishing a high degree of implementation independence and still letting your classes obey the original interface (so you don't have to keep wrapping and unwrapping them).
Composite
The composite pattern is particularly easy to achieve with case classes, though making updates is rather arduous. It is equally valuable in Scala and Java.
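A minimal sketch, with made-up names:
sealed trait Tree
final case class Leaf(value: Int) extends Tree
final case class Branch(children: List[Tree]) extends Tree

def sum(t: Tree): Int = t match {
  case Leaf(v)          => v
  case Branch(children) => children.map(sum).sum
}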
Decorator
Decorators are awkward. You usually don't want to use the same methods on a different class in the case where inheritance isn't exactly what you want; what you really want is a different method on the same class which does what you want instead of the default thing. The enrich-my-library pattern is often a superior substitute.
Facade
Facade works better in Scala than in Java because you can have traits carry partial implementations around so you don't have to do all the work yourself when you combine them.
Flyweight
Although the flyweight idea is as valid in Scala as Java, you have a couple more tools at your disposal to implement it: lazy val, where a variable is not created unless it's actually needed (and thereafter is reused), and by-name parameters, where you only do the work required to create a function argument if the function actually uses that value. That said, in some cases the Java pattern stands unchanged.
Proxy
Works the same way in Scala as Java.
Behavioral Patterns
Chain of responsibility
In those cases where you can list the responsible parties in order, you can
xs.find(_.handleMessage(m))
assuming that everyone has a handleMessage method that returns true if the message was handled. If you want to mutate the message as it goes, use a fold instead.
Since it's easy to drop responsible parties into a Buffer of some sort, the elaborate framework used in Java solutions rarely has a place in Scala.
Command
This pattern is almost entirely superseded by functions. For example, instead of all of
public interface ChangeListener extends EventListener {
  void stateChanged(ChangeEvent e);
}
...
void addChangeListener(ChangeListener listener) { ... }
you simply
def onChange(f: ChangeEvent => Unit)
Interpreter
Scala provides parser combinators which are dramatically more powerful than the simple interpreter suggested as a Design Pattern.
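A tiny sketch of the idea; note that in recent Scala versions the combinators live in the separate scala-parser-combinators module (they used to ship with the standard library):
import scala.util.parsing.combinator.JavaTokenParsers

object Arith extends JavaTokenParsers {
  def term: Parser[Double] = floatingPointNumber ^^ (_.toDouble)
  def expr: Parser[Double] = term ~ rep("+" ~> term) ^^ {
    case t ~ ts => ts.foldLeft(t)(_ + _)
  }
}

// Arith.parseAll(Arith.expr, "1 + 2 + 3")  // parses and evaluates to 6.0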
Iterator
Scala has Iterator built into its standard library. It is almost trivial to make your own class extend Iterator or Iterable; the latter is usually better since it makes reuse trivial. Definitely a good idea, but so straightforward I'd hardly call it a pattern.
Mediator
This works fine in Scala, but is generally useful for mutable data, and even mediators can fall afoul of race conditions and such if not used carefully. Instead, try when possible to have your related data all stored in one immutable collection, case class, or whatever, and when making an update that requires coordinated changes, change all things at the same time. This won't help you interface with javax.swing, but is otherwise widely applicable:
case class Entry(s: String, d: Double, notes: Option[String]) {}

def parse(s0: String, old: Entry) = {
  try { old.copy(s = s0, d = s0.toDouble) }
  catch { case e: Exception => old }
}
Save the mediator pattern for when you need to handle multiple different relationships (one mediator for each), or when you have mutable data.
Memento
lazy val is nearly ideal for many of the simplest applications of the memento pattern, e.g.
class OneRandom {
  lazy val value = scala.util.Random.nextInt
}
val r = new OneRandom
r.value // Evaluated here
r.value // Same value returned again
You may wish to create a small class specifically for lazy evaluation:
class Lazily[A](a: => A) {
  lazy val value = a
}

val r = new Lazily(scala.util.Random.nextInt)
// not actually called until/unless we ask for r.value
Observer
This is a fragile pattern at best. Favor, whenever possible, either keeping immutable state (see Mediator), or using actors where one actor sends messages to all others regarding the state change, but where each actor can cope with being out of date.
State
This is equally useful in Scala, and is actually the favored way to create enumerations when applied to methodless traits:
sealed trait DayOfWeek
final trait Sunday extends DayOfWeek
...
final trait Saturday extends DayOfWeek
(often you'd want the weekdays to do something to justify this amount of boilerplate).
Strategy
This is almost entirely replaced by having methods take functions that implement a strategy, and providing functions to choose from.
def printElapsedTime(t: Long, rounding: Double => Long = math.round): Unit = {
  println(rounding(t * 0.001))
}
printElapsedTime(1700, d => math.floor(d).toLong) // Change strategy
Template Method
Traits offer so many more possibilities here that it's best to just consider them another pattern. You can fill in as much code as you can from as much information as you have at your level of abstraction. I wouldn't really want to call it the same thing.
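A minimal sketch of a trait playing the template role (names are made up):
trait Report {
  def body: String                            // the abstract "hook" step
  def render: String = s"== Report ==\n$body" // the concrete template
}

class SalesReport extends Report {
  def body = "sales went up"
}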
Visitor
Between structural typing and implicit conversion, Scala has astoundingly more capability than Java's typical visitor pattern. There's no point using the original pattern; you'll just get distracted from the right way to do it. Many of the examples are really just wishing there was a function defined on the thing being visited, which Scala can do for you trivially (i.e. convert an arbitrary method to a function).
Ok, let's have a brief look at these patterns. I'm looking at all these patterns purely from a functional programming point of view, and leaving out many things that Scala can improve from an OO point of view. Rex Kerr's answer provides an interesting counterpoint to my own answers (I only read his answer after writing my own).
With that in mind, I'd like to say that it is important to study persistent data structures (functionally pure data structures) and monads. If you want to go deep, I think category theory basics are important -- category theory can formally describe all program structures, including imperative ones.
Creational Patterns
A constructor is nothing more than a function. A parameterless constructor for type T is nothing more than a function () => T, for example. In fact, case classes take advantage of Scala's syntactic sugar for functions:
case class T(x: Int)
That is equivalent to:
class T(val x: Int) { /* bunch of methods */ }

object T {
  def apply(x: Int) = new T(x)
  /* other stuff */
}
So that you can instantiate T with T(n) instead of new T(n). You could even write it like this:
object T extends (Int => T) {
  def apply(x: Int) = new T(x)
  /* other stuff */
}
Which turns T into a formal function, without changing any code.
This is the important point to keep in mind when thinking of creational patterns. So let's look at them:
Abstract Factory
This one is unlikely to change much. A class can be thought of as a group of closely related functions, so a group of closely related functions is easily implemented through a class, which is what this pattern does for constructors.
Builder
Builder patterns can be replaced by curried functions or partial function applications.
// curried version
def makeCar: Size => Engine => Luxuries => Car = ???
def makeLargeCars = makeCar(Size.Large)

// uncurried version, partially applied
def makeCar: (Size, Engine, Luxuries) => Car = ???
def makeLargeCars = makeCar(Size.Large, _: Engine, _: Luxuries)
Factory Method
Becomes obsolete if you discard subclassing.
Prototype
Doesn't change -- in fact, this is a common way of creating data in functional data structures. See case classes' copy method, or all the non-mutating methods on collections which return collections.
Singleton
Singletons are not particularly useful when your data is immutable, but Scala's object implements this pattern in a safe manner.
Structural Patterns
This is mostly related to data structures, and the important point on functional programming is that the data structures are usually immutable. You'd be better off looking at persistent data structures, monads and related concepts than trying to translate these patterns.
That's not to say the patterns here aren't relevant. I'm just saying that, as a general rule, you should look into the things above instead of trying to translate structural patterns into functional equivalents.
Adapter
This pattern is related to classes (nominal typing), so it remains important as long as you have that, and is irrelevant when you don't.
Bridge
Related to OO architecture, so the same as above.
Composite
Look at Lenses and Zippers.
Decorator
A Decorator is just function composition. If you are decorating a whole class, that may not apply. But if you provide your functionality as functions, then composing a function while maintaining its type is a decorator.
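A minimal sketch of "decorator as function composition", with made-up decorations:
val trim: String => String = _.trim
val quote: String => String = s => "\"" + s + "\""

val decorate: String => String = trim andThen quote
// decorate("  hello  ")  // "hello" surrounded by quotes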
Facade
Same comment as for Bridge.
Flyweight
If you think of constructors as functions, think of flyweight as function memoization. Also, Flyweight is intrinsically related to how persistent data structures are built, and benefits a lot from immutability.
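A sketch of that "constructor memoization" view, with made-up names (a plain mutable map, so not thread-safe):
def memoize[A, B](f: A => B): A => B = {
  val cache = scala.collection.mutable.Map.empty[A, B]
  a => cache.getOrElseUpdate(a, f(a))
}

case class Glyph(char: Char)
val glyph = memoize((c: Char) => Glyph(c)) // repeated chars reuse the cached Glyph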
Proxy
Same comment as for Adapter.
Behavioral Patterns
This is all over the place. Some of them are completely useless, while others are as relevant as always in a functional setting.
Chain of Responsibility
Like Decorator, this is function composition.
Command
This is a function. The undo part is not necessary if your data is immutable. Otherwise, just keep a pair of function and its reverse. See also Lenses.
Interpreter
This is a monad.
Iterator
It can be rendered obsolete by just passing a function to the collection. That's what Traversable does with foreach, in fact. Also, see Iteratee.
Mediator
Still relevant.
Memento
Useless with immutable objects. Also, its point is keeping encapsulation, which is not a major concern in FP.
Note that this pattern is not serialization, which is still relevant.
Observer
Relevant, but see Functional Reactive Programming.
State
This is a monad.
Strategy
A strategy is a function.
Template Method
This is an OO design pattern, so it's relevant for OO designs.
Visitor
A visitor is just a method receiving a function. In fact, that's what Traversable's foreach does.
In Scala, it can also be replaced with extractors.
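A minimal sketch of the extractor idea (Email is a made-up name):
object Email {
  def unapply(s: String): Option[(String, String)] = s.split("@") match {
    case Array(user, domain) => Some((user, domain))
    case _                   => None
  }
}

// "foo@example.com" match { case Email(user, domain) => ... }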
I suppose the Command pattern is not needed in functional languages at all. Instead of encapsulating a command function inside an object and then selecting the appropriate object, just use the appropriate function itself.
Flyweight is just a cache, and has a default implementation in most functional languages (memoize in Clojure).
Even Template Method, Strategy and State can be implemented by just passing an appropriate function into a method.
So, I recommend not going deep into Design Patterns when you try out the functional style, but rather reading some books about functional concepts (higher-order functions, laziness, currying, and so on).