I recently wrote some code like the block below, and it left me thinking that the design could be improved if I were more knowledgeable about functional programming abstractions.
sealed trait Foo
case object A extends Foo
case object B extends Foo
case object C extends Foo
// ...

object Foo {
  private def someFunctionSemanticallyRelatedToA() = { /* do stuff */ }
  private def someFunctionSemanticallyRelatedToB() = { /* do stuff */ }
  private def someFunctionSemanticallyRelatedToC() = { /* do stuff */ }
  // ...

  def somePublicFunction(x: Foo) = x match {
    case A => someFunctionSemanticallyRelatedToA()
    case B => someFunctionSemanticallyRelatedToB()
    case C => someFunctionSemanticallyRelatedToC()
    // ...
  }
}
My questions are:
Is somePublicFunction() suffering from a code smell, or even the whole design? My concern is that the list of value constructors could grow quite large.
Is there a better FP abstraction that handles this kind of design more elegantly and concisely?
You've just run into the expression problem. In your code sample, the problem is that potentially every time you add or remove a case from your Foo algebraic data type, you'll need to modify every single match (like the one in somePublicFunction) against values of Foo. In Nimrand's answer, the problem is at the opposite end of the spectrum: you can add or remove cases from Foo easily, but every time you want to add or remove a behaviour (a method), you'll need to modify every subclass of Foo.
There are various proposals to solve the expression problem, but one interesting functional way is Oleg Kiselyov's Typed Tagless Final Interpreters, which replaces each case of the algebraic data type with a function that returns some abstract value that's considered to be equivalent to that case. Using generics (i.e. type parameters), these functions can all have compatible types and work with each other no matter when they were implemented. E.g., I've implemented an example of building and evaluating an arithmetic expression tree using TTFI: https://github.com/yawaramin/scala-ttfi
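As a hedged illustration of the idea (the names below are mine, not from the linked repo): each constructor of the ADT becomes a method on a parameterized trait, and each interpretation becomes an instance of that trait.

trait ExprSym[A] {
  def lit(n: Int): A
  def add(x: A, y: A): A
}

// One interpreter evaluates to Int...
object Eval extends ExprSym[Int] {
  def lit(n: Int) = n
  def add(x: Int, y: Int) = x + y
}

// ...another pretty-prints. New interpreters require no changes to existing code.
object Show extends ExprSym[String] {
  def lit(n: Int) = n.toString
  def add(x: String, y: String) = s"($x + $y)"
}

def expr[A](s: ExprSym[A]): A = s.add(s.lit(1), s.lit(2))
// expr(Eval) == 3, expr(Show) == "(1 + 2)"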
Your explanation is a bit too abstract for a confident answer. However, if the list of subclasses of Foo is likely to grow or change in the future, I would be inclined to make the behaviour an abstract method of Foo and implement the logic for each case in the subclasses. Then you just call x.myAbstractMethod() and polymorphism handles everything neatly.
This keeps the code specific to each object with the object itself, which keeps things more neatly organized. It also means that you can add new subclasses of Foo without having to jump around to multiple places in the code to augment the existing match statements.
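A minimal sketch of that suggestion, reusing the question's names (doStuff is a placeholder for the real behaviour):

sealed trait Foo {
  def doStuff(): Unit
}
case object A extends Foo {
  def doStuff(): Unit = { /* logic semantically related to A */ }
}
case object B extends Foo {
  def doStuff(): Unit = { /* logic semantically related to B */ }
}
// Adding a new case means adding one object here; no match statements to update.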
Case classes and pattern matching work best when the set of subclasses is relatively small and fixed. For example, Option[T] has only two subclasses, Some[T] and None. That will NEVER change, because changing it would fundamentally change what Option[T] represents. Therefore, it's a good candidate for pattern matching.
Suppose that I have to implement two different interfaces declared in two different packages (in two separate projects).
In package A I have:
package A

type Doer interface {
    Do() string
}

func FuncA(doer Doer) {
    // Do some logic here using the doer.Do() result.
    // The Doer interface that doer should implement is A.Doer.
}
And in package B
package B

type Doer interface {
    Do() string
}

func FuncB(doer Doer) {
    // Some logic using the doer.Do() result.
    // The Doer interface that doer should implement is B.Doer.
}
In my main package
package main
import (
"path/to/A"
"path/to/B"
)
type C int

// This method implements both A.Doer and B.Doer, but
// the implementation of Do here is the one required by A!
func (c C) Do() string {
    return "C now implements both A and B"
}

func main() {
    c := C(0)
    A.FuncA(c)
    B.FuncB(c) // the logic implemented by C.Do will cause a bug here!
}
How should I deal with this situation?
As the FAQ mentions
Experience with other languages told us that having a variety of methods with the same name but different signatures was occasionally useful but that it could also be confusing and fragile in practice.
Matching only by name and requiring consistency in the types was a major simplifying decision in Go's type system.
In your case, you would satisfy both interfaces.
You can test whether an object (of an interface type) satisfies another interface type such as A.Doer by doing:

if d, ok := obj.(A.Doer); ok {
    // obj satisfies A.Doer; use d as an A.Doer here.
}
The OP adds:
But the logic implemented in the Do method to satisfy A is completely different from the one in B.
Then you need to implement wrappers around your object:
a DoerA, which has your object C as a field and implements A.Do() in a manner that satisfies how A.Do() is supposed to work
a DoerB, which has the same object C as a field and implements B.Do() in a manner that satisfies how B.Do() is supposed to work
That way, you will know which Doer to pass to a function expecting an A.Doer or a B.Doer.
You won't have to implement a Do() method on your original object C, which would be unable to cope with the different logic of A.Do() and B.Do().
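A minimal sketch of those wrappers, under the assumption that C holds whatever state both behaviours need:

type DoerA struct{ c C }

// Do implements A.Doer with the logic FuncA expects.
func (d DoerA) Do() string { return "logic required by A, using d.c" }

type DoerB struct{ c C }

// Do implements B.Doer with the logic FuncB expects.
func (d DoerB) Do() string { return "logic required by B, using d.c" }

// In main: A.FuncA(DoerA{c}) and B.FuncB(DoerB{c}) each get the right behaviour.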
By definition, you are satisfying both:
A Go type satisfies an interface by implementing the methods of that interface, nothing more. This property allows interfaces to be defined and used without having to modify existing code. It enables a kind of structural typing that promotes separation of concerns and improves code re-use, and makes it easier to build on patterns that emerge as the code develops. The semantics of interfaces is one of the main reasons for Go's nimble, lightweight feel.
So, with that in mind, you can either:
a) Add comments to the interface methods defining your expectations of the logic (see the io.Reader interface for a good example)
b) Add extra methods called ImplementsDoerA and ImplementsDoerB to the respective interfaces (also mentioned in the FAQ).
I have the following problem (programming language: Scala):
A derived class D must compute its base class parameters from its own parameters, with a somewhat lengthy computation:
class D(a:Int,b:Int) extends B(f(a,b)) {...}
Now I seem to have 3 possibilities:
(a) Put the body of f(a,b) directly in the constructor call of B: B({...})
(b) Define f in the companion object of D (Scala's closest equivalent of a static function)
(c) Define f as a member function of D that happens not to use any instance state
Option (a) seems very inelegant.
Are these approaches even legal?
What would you do?
I am asking because I am just starting out with Scala, and our organisation does not have Scala installed yet, so I can only experiment intermittently at home.
Thanks for all replies.
Implement an apply method in the companion object that transforms the constructor parameters appropriately:

object B {
  private def f(a: Int, b: Int) = a + b // placeholder for the lengthy computation
  def apply(a: Int, b: Int) = new B(f(a, b))
}
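Callers then go through the companion object, so the transformation runs in exactly one place; for example:

val b = B(1, 2) // invokes B.apply, which computes f(1, 2) before calling the constructor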
If you always need to transform the constructor arguments, think about whether your API is designed correctly; a code review may help improve it.
I would use (a) if I didn't need to reuse the function elsewhere. The function has to go somewhere, so why not where it's actually being used? But this is mostly an aesthetic choice; (b) is fine too. If the function needs to be called from elsewhere, then I'd go with (b). I wouldn't use (c): whether or not it's legal, it's certainly misleading and likely to cause maintenance problems when a future editor assumes it is a normal member function.
There are some frameworks that fully embrace the typeclass pattern; scalaz and shapeless are good examples. So there are certainly cases where typeclasses are preferable to ordinary Java-style classes and polymorphism.
I'm in awe of the expressive power of implicit evidence, and I'm curious why the technique sees so little practical application. What reasons compel Scala programmers to use ordinary classes? Typeclasses obviously cost something in verbosity and at run time, but is there any other reason?
I came to Scala without prior Java experience, and I wonder if I've missed some essential benefits that classic Scala/Java classes provide.
I'm looking for some spectacular use cases showing areas where typeclasses are insufficient or ineffective.
Typeclasses and inheritance enable reuse in different ways. Inheritance excels at providing correct functionality for changed internals.
class Foo { def foo: String = "foo" }

def fooUser(foo: Foo): Unit = { println(foo.foo) }

class Bar extends Foo {
  private var annotation = List.empty[String]
  def annotate(s: String): Unit = { annotation = s :: annotation }
  override def foo = ("bar" :: annotation.map("#" + _)).mkString(" ")
}
Now, everyone who uses Foo will be able to get the correct value if you give them a Bar, even if they only know that the type is a Foo. You don't have to have anticipated that you might want pluggable functionality (except by not labeling foo final). You don't need to keep track of the type or keep passing a witness instance forwards; you just use Bar wherever you want in place of Foo and it does the right thing. This is a big deal. If you want a fixed interface with easily-modifiable functionality under the hood, inheritance is your thing.
In contrast, inheritance is not so great when you have a fixed set of data types with an easily-modifiable interface. Sorting is a great example. Suppose you want to sort Foo. If you try
class Foo extends Sortable[Foo] {
def lt(you: Foo) = foo < you.foo
def foo = "foo"
}
you could pass this to anything that can sort a Sortable. But what if you want to sort by length of name rather than with the standard sort? Well,
class Foo extends LexicallySortable[Foo] with LengthSortable[Foo] {
def lexicalLt(you: Foo) = foo < you.foo
def lengthLt(you: Foo) = foo.length < you.foo.length
def foo = "foo"
}
This rapidly becomes hopeless, especially since you have to hunt down all subclasses of Foo and make sure they are updated properly. You are much better off deferring the less-than computation to a typeclass which you can swap out as needed. (Or to a regular class, which you must always reference explicitly.) This kind of automatically-selected functionality is also a big deal.
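A minimal sketch of that deferral, using the standard Ordering type class (the case class here is illustrative):

case class Foo(foo: String)

val lexical: Ordering[Foo] = Ordering.by(_.foo)
val byLength: Ordering[Foo] = Ordering.by(_.foo.length)

// The ordering is chosen at the call site, not baked into Foo or its subclasses.
List(Foo("z"), Foo("aa")).sorted(lexical)  // List(Foo("aa"), Foo("z"))
List(Foo("z"), Foo("aa")).sorted(byLength) // List(Foo("z"), Foo("aa"))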
You can't really replace one with the other. When you need to easily incorporate new kinds of data to a fixed interface, use inheritance. When you need a few kinds of underlying data but need to easily supply new functionality, use type classes. When you need both, you will have a lot of work to do whichever way you go about it, so use to taste.
As I understand from this blog post, "type classes" in Scala are just a "pattern" implemented with traits and implicit adapters.
As the blog says, if I have a trait A and an adapter B -> A, then I can invoke a function that requires an argument of type A with an argument of type B, without invoking the adapter explicitly.
I found it nice but not particularly useful. Could you give a use case or example that shows what this feature is useful for?
One use case, as requested...
Imagine you have a list of things, could be integers, floating point numbers, matrices, strings, waveforms, etc. Given this list, you want to add the contents.
One way to do this would be to have some Addable trait that must be inherited by every single type that can be added together, or an implicit conversion to an Addable if dealing with objects from a third party library that you can't retrofit interfaces to.
This approach quickly becomes overwhelming when you also want to begin adding other operations that can be applied to a list of objects. It also doesn't work well if you need alternatives (for example: does adding two waveforms concatenate them, or overlay them?). The solution is ad-hoc polymorphism, where you can pick and choose behaviour to be retrofitted to existing types.
For the original problem then, you could implement an Addable type class:
trait Addable[T] {
def zero: T
def append(a: T, b: T): T
}
//yup, it's our friend the monoid, with a different name!
You can then create implicit subclassed instances of this, corresponding to each type that you wish to make addable:
implicit object IntIsAddable extends Addable[Int] {
def zero = 0
def append(a: Int, b: Int) = a + b
}
implicit object StringIsAddable extends Addable[String] {
def zero = ""
def append(a: String, b: String) = a + b
}
//etc...
The method to sum a list then becomes trivial to write...
def sum[T](xs: List[T])(implicit addable: Addable[T]) =
  xs.foldLeft(addable.zero)(addable.append)

//or the same thing, using context bounds:
def sum[T : Addable](xs: List[T]) = {
  val addable = implicitly[Addable[T]]
  xs.foldLeft(addable.zero)(addable.append)
}
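With the instances above in scope, the same method works for any registered type:

sum(List(1, 2, 3))       // 6, via IntIsAddable
sum(List("a", "b", "c")) // "abc", via StringIsAddable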
The beauty of this approach is that you can supply an alternative definition of some typeclass, either controlling the implicit you want in scope via imports, or by explicitly providing the otherwise implicit argument. So it becomes possible to provide different ways of adding waveforms, or to specify modulo arithmetic for integer addition. It's also fairly painless to add a type from some 3rd-party library to your typeclass.
Incidentally, this is exactly the approach taken by the 2.8 collections API, though the sum method is defined on TraversableLike instead of on List, and the type class is Numeric (which also contains a few more operations than just zero and append).
Reread the first comment there:
A crucial distinction between type classes and interfaces is that for class A to be a "member" of an interface it must declare so at the site of its own definition. By contrast, any type can be added to a type class at any time, provided you can provide the required definitions, and so the members of a type class at any given time are dependent on the current scope. Therefore we don't care if the creator of A anticipated the type class we want it to belong to; if not we can simply create our own definition showing that it does indeed belong, and then use it accordingly. So this not only provides a better solution than adapters, in some sense it obviates the whole problem adapters were meant to address.
I think this is the most important advantage of type classes.
Also, typeclasses properly handle cases where an operation doesn't take an argument of the type being dispatched on, or takes more than one. E.g., consider this type class:
case class Default[T](default: T)

object Default {
  implicit val intDefault: Default[Int] = Default(0)
  implicit def optionDefault[T]: Default[Option[T]] = Default(None)
  ...
}
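A short sketch of why this matters (defaultOf is an illustrative name): the method below produces a T without ever receiving one, which has no equivalent in subtype polymorphism:

def defaultOf[T](implicit d: Default[T]): T = d.default

defaultOf[Int]         // 0
defaultOf[Option[Int]] // None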
I think of type classes as the ability to add type-safe metadata to a class.
So you first define a class to model the problem domain, and then think of metadata to add to it: things like Equals, Hashable, Viewable, etc. This creates a separation between the problem domain and the mechanics for using the class, and it opens up subclassing because the class is leaner.
Beyond that, you can define type class instances anywhere in scope, not just where the class is defined, and you can swap implementations. For example, if I compute a hash code for a Point class using Point#hashCode, then I'm limited to that specific implementation, which may not give a good distribution of values for the particular set of Points I have. But if I use Hashable[Point], I can provide my own implementation.
[Updated with example]
As an example, here's a use case I had last week. In our product there are several cases of Maps containing containers as values. E.g., Map[Int, List[String]] or Map[String, Set[Int]]. Adding to these collections can be verbose:
map += key -> (value :: map.getOrElse(key, List()))
So I wanted to have a function that wraps this so I could write
map +++= key -> value
The main issue is that the collections don't all have the same methods for adding elements: some have '+' while others have ':+'. I also wanted to retain the efficiency of adding elements to a list, so I didn't want to use fold/map, which create new collections.
The solution is to use type classes:
import scala.collection.generic.CanBuildFrom

trait Addable[C, CC] {
  def add(c: C, cc: CC): CC
  def empty: CC
}

object Addable {
  implicit def listAddable[A] = new Addable[A, List[A]] {
    def empty = Nil
    def add(c: A, cc: List[A]) = c :: cc
  }
  implicit def addableAddable[A, Add](implicit cbf: CanBuildFrom[Add, A, Add]) = new Addable[A, Add] {
    def empty = cbf().result
    def add(c: A, cc: Add) = (cbf(cc) += c).result
  }
}
Here I defined a type class Addable that can add an element C to a collection CC. I have 2 default implementations: For Lists using :: and for other collections, using the builder framework.
Then the type class is used like this:
class RichCollectionMap[A, C, B[_], M[X, Y] <: collection.Map[X, Y]](map: M[A, B[C]])(implicit adder: Addable[C, B[C]]) {
  def updateSeq[That](a: A, c: C)(implicit cbf: CanBuildFrom[M[A, B[C]], (A, B[C]), That]): That = {
    val pair = (a -> adder.add(c, map.getOrElse(a, adder.empty)))
    (map + pair).asInstanceOf[That]
  }
  def +++[That](t: (A, C))(implicit cbf: CanBuildFrom[M[A, B[C]], (A, B[C]), That]): That = updateSeq(t._1, t._2)(cbf)
}

implicit def toRichCollectionMap[A, C, B[_], M[X, Y] <: collection.Map[X, Y]](map: M[A, B[C]])(implicit adder: Addable[C, B[C]]) = new RichCollectionMap[A, C, B, M](map)
The special bit is using adder.add to add the elements and adder.empty to create new collections for new keys.
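For illustration, usage then looks something like this (the parentheses around the pair avoid precedence surprises, since +++ and -> share operator precedence):

val m = Map.empty[String, List[Int]]
val m2 = m +++ ("a" -> 1)  // Map("a" -> List(1))
val m3 = m2 +++ ("a" -> 2) // Map("a" -> List(2, 1))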
To compare, without type classes I would have had 3 options:
1. to write a method per collection type. E.g., addElementToSubList and addElementToSet etc. This creates a lot of boilerplate in the implementation and pollutes the namespace
2. to use reflection to determine if the sub collection is a List / Set. This is tricky, as the map is empty to begin with (of course Scala also helps here with Manifests)
3. to have poor-man's type class by requiring the user to supply the adder. So something like addToMap(map, key, value, adder), which is plain ugly
Another place I find typeclasses helpfully described is the blog post Monads Are Not Metaphors.
Search the article for "typeclass"; it should be the first match. In the article, the author provides an example of a Monad typeclass.
The forum thread "What makes type classes better than traits?" makes some interesting points:
Typeclasses can very easily represent notions that are quite difficult to represent in the presence of subtyping, such as equality and ordering.
Exercise: create a small class/trait hierarchy and try to implement .equals on each class/trait in such a way that the operation over arbitrary instances from the hierarchy is properly reflexive, symmetric, and transitive.
Typeclasses allow you to provide evidence that a type outside of your "control" conforms with some behavior.
Someone else's type can be a member of your typeclass.
You cannot express "this method takes/returns a value of the same type as the method receiver" in terms of subtyping, but this (very useful) constraint is straightforward using typeclasses. This is the f-bounded types problem (where an F-bounded type is parameterized over its own subtypes).
All operations defined on a trait require an instance; there is always a this argument. So you cannot define, for example, a fromString(s: String): Foo method on trait Foo in such a way that you can call it without an instance of Foo.
In Scala this manifests as people desperately trying to abstract over companion objects.
But it is straightforward with a typeclass, as illustrated by the zero element in this monoid example.
Typeclasses can be defined inductively; for example, if you have a JsonCodec[Woozle] you can get a JsonCodec[List[Woozle]] for free.
The example above illustrates this for "things you can add together".
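A hedged sketch of that inductive step (Woozle and JsonCodec are the thread's hypothetical names; the encoding shown is simplistic):

trait JsonCodec[A] {
  def encode(a: A): String
}

object JsonCodec {
  implicit val intCodec: JsonCodec[Int] = new JsonCodec[Int] {
    def encode(a: Int) = a.toString
  }
  // Given a codec for A, the compiler can derive one for List[A] "for free".
  implicit def listCodec[A](implicit c: JsonCodec[A]): JsonCodec[List[A]] =
    new JsonCodec[List[A]] {
      def encode(as: List[A]) = as.map(c.encode).mkString("[", ",", "]")
    }
}

implicitly[JsonCodec[List[Int]]].encode(List(1, 2, 3)) // "[1,2,3]"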
One way to look at type classes is that they enable retroactive extension or retroactive polymorphism. There are a couple of great posts by Casual Miracles and Daniel Westheide that show examples of using Type Classes in Scala to achieve this.
Here's a post on my blog that explores various methods in Scala of retroactive supertyping (a kind of retroactive extension), including a typeclass example.
I don't know of any use case other than ad-hoc polymorphism, which is explained here about as well as it can be.
Both implicits and typeclasses can be used for type conversion. The major use case for both is to provide ad-hoc polymorphism, i.e. inheritance-style polymorphism, for classes that you can't modify. With implicits you can use either an implicit def or an implicit class (a wrapper class hidden from the client). Typeclasses are more powerful, since they can also add functionality to an existing inheritance chain (e.g. Ordering[T] in Scala's sort functions).
For more detail you can see https://lakshmirajagopalan.github.io/diving-into-scala-typeclasses/
In Scala, type classes:
- Enable ad-hoc polymorphism
- Are statically typed (i.e. type-safe)
- Are borrowed from Haskell
- Solve the expression problem: behavior can be extended
  - at compile time
  - after the fact
  - without changing/recompiling existing code

Scala implicits:
- The last parameter list of a method can be marked implicit
- Implicit parameters are filled in by the compiler
- In effect, you require evidence from the compiler
  - such as the existence of a type class instance in scope
- You can also pass the parameters explicitly, if needed
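A minimal sketch of those last bullets, using the standard Ordering type class as the required evidence (smallest is an illustrative name):

def smallest[T](xs: List[T])(implicit ord: Ordering[T]): T = xs.min(ord)

smallest(List(3, 1, 2))                        // 1: the compiler supplies Ordering[Int]
smallest(List(3, 1, 2))(Ordering[Int].reverse) // 3: the evidence passed explicitly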
The example below extends the String class with a new method via an implicit conversion to a wrapper class, even though String is final :)

import scala.language.implicitConversions

/**
 * Created by nihat.hosgur on 2/19/17.
 */
case class PrintTwiceString(original: String) {
  def printTwice = original + original
}

object TypeClassString extends App {
  implicit def stringToPrintTwice(s: String) = PrintTwiceString(s)
  val name: String = "Nihat"
  println(name.printTwice) // prints "NihatNihat"
}
This is an important difference (needed for functional programming): consider inc :: Num a => a -> a.
The type received is the same as the type returned; this cannot be expressed with subtyping.
I like to use type classes as a lightweight Scala idiomatic form of Dependency Injection that still works with circular dependencies yet doesn't add a lot of code complexity. I recently rewrote a Scala project from using the Cake Pattern to type classes for DI and achieved a 59% reduction in code size.
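A minimal sketch of that style, with illustrative names (not from the project mentioned):

trait Logger { def log(msg: String): Unit }
trait UserRepo { def find(id: Int): String }

// Dependencies are requested as implicit evidence instead of being wired by a framework.
def handle(id: Int)(implicit log: Logger, repo: UserRepo): String = {
  log.log(s"looking up $id")
  repo.find(id)
}

implicit val consoleLogger: Logger = new Logger {
  def log(msg: String): Unit = println(msg)
}
implicit val inMemoryRepo: UserRepo = new UserRepo {
  def find(id: Int): String = s"user-$id"
}

handle(42) // the compiler injects consoleLogger and inMemoryRepo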