Interface in a dynamic language?

An interface (or an abstract class with all methods abstract) is a powerful tool in statically typed languages such as C# and Java. It allows different derived types to be used in a uniform way. Design patterns encourage us to program to interfaces as much as possible.
However, in a dynamically typed language, objects are not type-checked at compile time. They don't have to implement an interface to be used in a specific way; you just need to make sure they define the necessary methods (attributes). This makes interfaces unnecessary, or at least not as useful as they are in a static language.
Does a typical dynamic language (e.g. Ruby) have interfaces? If it does, what are the benefits of having them? If it doesn't, are we losing many of the beautiful design patterns that require an interface?
Thanks.

I guess there is no single answer for all dynamic languages. In Python, for instance, there are no interfaces, but there is multiple inheritance. Using interface-like classes is still useful:
Interface-like classes can provide default implementations of methods;
Duck typing is good, but only to an extent; sometimes it is useful to be able to write isinstance(x, SomeType), especially when SomeType defines many methods.
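To make that concrete, here's a rough sketch using Python's abc module (the Serializer/JsonSerializer names are made up for illustration): the abstract base class plays the interface role, supplies a default method, and supports an isinstance check.
import abc
import json

class Serializer(abc.ABC):
    @abc.abstractmethod
    def serialize(self, obj):
        """Turn obj into a string."""

    def serialize_many(self, objs):
        # default implementation built on top of the abstract method
        return [self.serialize(o) for o in objs]

class JsonSerializer(Serializer):
    def serialize(self, obj):
        return json.dumps(obj)

s = JsonSerializer()
assert isinstance(s, Serializer)            # interface-style runtime check
print(s.serialize_many([{"a": 1}, [2, 3]])) # inherited default method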

Interfaces in dynamic languages are useful as documentation of APIs that can be checked automatically, e.g. by development tools or asserts at runtime.
As an example, zope.interface is the de facto standard for interfaces in Python. Projects such as Zope and Twisted, which expose huge APIs for consumption, find it useful, but as far as I know it's not used much outside those kinds of projects.
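For a feel of what that looks like, here is a minimal sketch (assuming zope.interface is installed; IGreeter and FriendlyGreeter are invented names). The interface documents the API, and verifyObject acts as a runtime assert.
from zope.interface import Interface, implementer
from zope.interface.verify import verifyObject

class IGreeter(Interface):
    def greet(name):
        """Return a greeting for name."""

@implementer(IGreeter)
class FriendlyGreeter(object):
    def greet(self, name):
        return "Hello, %s!" % name

g = FriendlyGreeter()
verifyObject(IGreeter, g)      # raises if g doesn't provide the declared API
print(IGreeter.providedBy(g))  # True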

In Ruby, which is a dynamically-typed language and only allows single inheritance, you can mimic an "interface" via mixins, rather than polluting the class with the methods of the "interface".
Mixins partially mimic multiple inheritance, allowing an object to "inherit" from multiple sources, but without the ambiguity and complexity of actually having multiple parents. There is only one true parent.
To implement an interface (in the abstract sense, not an actual interface type as in statically typed languages), you define a module as if it were an interface in a static language, then include it in the class. Voila! You've gathered the duck type into what is essentially an interface.
Very simplified example:
module Equippable
  def weapon
    "broadsword"
  end
end

class Hero
  include Equippable

  def hero_method_1
  end

  def hero_method_2
  end
end

class Mount
  include Equippable

  def mount_method_1
  end
end

h = Hero.new
h.weapon # => "broadsword"

m = Mount.new
m.weapon # => "broadsword"
Equippable is the interface for Hero, Mount, and any other class or model that includes it.
(Obviously, the weapon will most likely be dynamically set by an initializer, which has been simplified away in this example.)

Related

PIMPL Patterns in "high-level" languages - Possible/Applicable?

In the C++ world, there are two well-known strategies for maintaining binary compatibility with a library:
Interfaces: all public classes are "interface" classes (only pure virtual methods, no members), which are implemented by private subclasses in the library
PIMPL pattern: all public classes hold a single member which is a pointer to a forward-declared class, whose definition is private to the library
Both of these achieve binary stability, but #1 comes with some major disadvantages. The primary one, I believe, is that in the library, methods that accept instances of the public interface classes almost always must immediately force-downcast them to the private implementation classes. The use of interfaces incorrectly signals to clients that they are free to supply their own implementations of these interfaces, which if they ever do, will immediately fail on one of these force-downcasts. Unless polymorphism is the goal, the use of interfaces is arguably the wrong design.
Now let's consider "high-level" languages like Java, Kotlin, C# and Swift (and maybe even TypeScript and Ruby). We can certainly adopt strategy #1. However, this strategy suffers from the same concerns mentioned above.
But what about the PIMPL pattern? There's no such thing as "forward declaration" in these languages, and we can't even separate the class definition and implementation into different files; the compiler does this for us when it creates the package. So does an analogous pattern exist in these languages that "hides" the private details, in the sense that it lets us freely modify them without breaking binary compatibility?
Which leads to the next question...
Is it even necessary to begin with to "hide" class innards to achieve binary stability in those languages? This is necessary in C++ because of its on-stack value semantics, which makes compiled client code sensitive to the memory size of the library's classes. But to my knowledge, class instances in the "high-level" languages aren't moved around on the call stack, and instead work more like pointers/references would in C++, which may render the concern moot. If that's true, we can simply write classes "naively", and be sure that the binary compatibility remains stable as long as we don't mess with public methods/members. We could, however, do whatever we wish with private members, even if it entails changing the memory size of the public classes, and it wouldn't force client code to be recompiled.
So, in summary: are PIMPL patterns possible in these languages, or does the concept not even apply because there's no problem of private details "leaking" into the binary interface to begin with?
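For concreteness, here is a rough sketch (in Python purely for brevity; the names are invented) of the "public facade delegating to a hidden implementation object" shape the PIMPL pattern describes, which is the structure in question for the high-level languages above.
class _ConnectionImpl:
    # hidden implementation: its fields and layout can change freely
    def __init__(self, host):
        self._host = host

    def send(self, data):
        return "sent %r to %s" % (data, self._host)

class Connection:
    # public facade: the only type clients see or depend on
    def __init__(self, host):
        self.__impl = _ConnectionImpl(host)

    def send(self, data):
        return self.__impl.send(data)

print(Connection("example.org").send(b"ping"))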

If you have Traits, do you stop using interfaces, Abstract base classes, and multiple inheritance?

It seems like Traits could completely replace interfaces, abstract base classes, mixins, and multiple inheritance, leaving you with just Traits and concrete inheritance.
Is this the intent?
If you have traits, which of the other code structuring constructs should you use?
(Roles are the Perl name for Traits.)
At least for Perl's Moose, there are no interfaces, so roles clearly subsume those, and generally mixins too. I'd say there still could be a case for abstract base classes. Roles can be considered what objects do, where classes are what they are.
By this line of reasoning, there still might be a valid use for an abstract base class. A URL is one example. There could easily be an abstract base class for a URL. An IO stream might be different, perhaps better as a role, as it defines how things behave rather than what they are.
When using roles, however, I have yet to see any clear need for true multiple inheritance from more than one class.
I have no use for interfaces or abstract classes at this point, but mixins and multiple inheritance are really enabled by traits so the usage of those paradigms is strongly encouraged here. Check the entire collection library to see the very rich classes you can build using these ideas.
Ah, my comments reflect Scala - I didn't realize you tagged this with multiple languages.
When you instantiate a trait, it consumes one extra class.
So regardless of expressiveness, you may still want to use the legacy constructs to prevent a class explosion in your jar (and longer startup time).
I'll let others answer about expressiveness :)
I'm only talking about Scala here...
Read this.

Mixins vs composition in scala

In the Java world (more precisely, if you have no multiple inheritance or mixins) the rule of thumb is quite simple: "Favor object composition over class inheritance."
I'd like to know if/how this changes if you also consider mixins, especially in Scala.
Are mixins considered a form of multiple inheritance, or more a form of class composition?
Is there also a "Favor object composition over class composition" (or the other way around) guideline?
I've seen quite a few examples where people use (or abuse) mixins when object composition could also do the job, and I'm not always sure which one is better. It seems to me that you can achieve quite similar things with them, but there are some differences as well. Some examples:
visibility - with mixins everything becomes part of the public api, which is not the case with composition.
verbosity - in most cases mixins are less verbose and a bit easier to use, but it's not always the case (e.g. if you also use self types in complex hierarchies)
I know the short answer is "It depends", but probably there are some typical situation when this or that is better.
Some examples of guidelines I could come up with so far (assuming I have two traits A and B and A wants to use some methods from B):
If you want to extend the API of A with the methods from B then mixins, otherwise composition. But it does not help if the class/instance that I'm creating is not part of a public API.
If you want to use some patterns that need mixins (e.g. Stackable Trait Pattern) then it's an easy decision.
If you have circular dependencies then mixins with self types can help. (I try to avoid this situation, but it's not always easy)
If you want some dynamic, runtime decisions how to do the composition then object composition.
In many cases mixins seem to be easier (and/or less verbose), but I'm quite sure they also have some pitfalls, like the "God class" and others described in two Artima articles: part 1, part 2. (BTW it seems to me that most of the other problems are not relevant, or not so serious, for Scala.)
Do you have more hints like these?
A lot of the problems that people have with mix-ins can be averted in Scala if you only mix-in abstract traits into your class definitions, and then mix in the corresponding concrete traits at object instantiation time. For instance
trait Locking {
  // abstract locking trait; many concrete definitions are possible
  protected def lock[A](body: => A): A
}

class MyService {
  // a concrete Locking trait must be mixed in when MyService is instantiated
  this: Locking =>
}

// For this instance, mix in a concrete trait backed by a java.util.concurrent lock
val myService: MyService = new MyService with JDK15Locking
This construct has several things to recommend it. First, it prevents an explosion of classes as different combinations of trait functionality are needed. Second, it allows for easy testing, as one can create and mix in "do-nothing" concrete traits, similar to mock objects. Finally, we've completely hidden the locking trait used, and even the fact that locking is going on at all, from consumers of our service.
Even once we've gotten past most of the claimed drawbacks of mix-ins, we're still left with the tradeoff between mix-ins and composition. For myself, I normally make the decision based on whether a hypothetical delegate object would be entirely encapsulated by the containing object, or whether it could potentially be shared and have a lifecycle of its own. Locking provides a good example of an entirely encapsulated delegate. If your class uses a lock object to manage concurrent access to its internal state, that lock is entirely controlled by the containing object, and neither it nor its operations are advertised as part of the class's public interface. For entirely encapsulated functionality like this, I go with mix-ins. For something shared, like a datasource, use composition.
Other differences you haven't mentioned:
Trait classes do not have any independent existence:
(Programming Scala)
If you find that a particular trait is used most often as a parent of other classes, so that the child classes behave as the parent trait, then consider defining the trait as a class instead, to make this logical relationship more clear.
(We said behaves as, rather than is a, because the former is the more precise definition of inheritance, based on the Liskov Substitution Principle - see [Martin2003], for example.)
[Martin2003]: Robert C. Martin, Agile Software Development: Principles, Patterns, and Practices, Prentice-Hall, 2003
Mixins (traits) have no constructor parameters.
Hence the advice, still from Programming Scala:
Avoid concrete fields in traits that can’t be initialized to suitable default values.
Use abstract fields instead or convert the trait to a class with a constructor.
Of course, stateless traits don’t have any issues with initialization.
It’s a general principle of good object-oriented design that an instance should always be in a known valid state, starting from the moment the construction process finishes.
That last part, regarding the initial state of an object, has often helped decide between class (and class composition) and trait (and mixins) for a given concept.

What exactly is a Class Factory?

I see the word thrown around often, and I may have used it myself in code and libraries over time, but I never really got it. In most write-ups I came across, they just went on expecting you to figure it out.
What is a Class Factory? Can someone explain the concept?
Here's some supplemental information that may help better understand several of the other shorter, although technically correct, answers.
In the strictest sense a Class Factory is a function or method that creates or selects a class and returns it, based on some condition determined from input parameters or global context. This is required when the type of object needed can't be determined until runtime. Implementation can be done directly when classes are themselves objects in the language being used, such as Python.
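For example (a minimal sketch; the reader classes are made up), since Python classes are first-class objects, a class factory can simply select and return one at runtime:
class CsvReader:
    def read(self, path):
        return "reading %s as CSV" % path

class JsonReader:
    def read(self, path):
        return "reading %s as JSON" % path

def reader_class_for(extension):
    # returns a class, not an instance; the choice happens at runtime
    return {"csv": CsvReader, "json": JsonReader}[extension]

cls = reader_class_for("json")
print(cls().read("data.json"))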
Since the primary use of any class is to create instances of itself, in languages such as C++ where classes are not objects that can be passed around and manipulated, a similar result can often be achieved by simulating "virtual constructors", where you call a base-class constructor but get back an instance of some derived class. This must be simulated because constructors can't really be virtual✶ in C++, which is why such object—not class—factories are usually implemented as standalone functions or static methods.
Although using object-factories is a simple and straightforward scheme, they require the manual maintenance of a list of all supported types in the base class' make_object() function, which can be error-prone and labor-intensive (when it isn't overlooked entirely). It also violates encapsulation✶✶ since a member of the base class must know about all of the base's concrete descendant classes (now and in the future).
✶ Virtual functions are normally resolved "late" by the actual type of object referenced, but in the case of constructors, the object doesn't exist yet, so the type must be determined by some other means.
✶✶ Encapsulation is a property of the design of a set of classes and functions where the knowledge of the implementation details of a particular class or function are hidden within it—and is one of the hallmarks of object-oriented programming.
Therefore the best/ideal implementations are those that can handle new candidate classes automatically when they're added, rather than having only a certain finite set currently hardcoded into the factory (although the trade-off is often deemed acceptable since the factory is the only place requiring modification).
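One way to sketch that idea in Python (illustrative names only; __init_subclass__ requires Python 3.6+) is to have candidate classes register themselves, so the factory never carries a hardcoded list:
class Shape:
    _registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # every new subclass registers itself automatically
        Shape._registry[cls.__name__.lower()] = cls

    @classmethod
    def make(cls, kind, *args):
        # no hardcoded list of supported types in the factory
        return cls._registry[kind](*args)

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

class Square(Shape):
    def __init__(self, side):
        self.side = side

print(Shape.make("circle", 2.0).radius)  # 2.0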
James Coplien's 1991 book Advanced C++: Programming Styles and Idioms has details on one way to implement such virtual generic constructors in C++. There are even better ways to do this using C++ templates, but that's not covered in the book which predates their addition to the standard language definition. In fact, C++ templates are themselves class factories since they instantiate a new class whenever they're invoked with different actual type arguments.
Update: I located a 1998 paper Coplien wrote for EuroPLoP titled C++ Idioms where, among other things, he revises and regroups the idioms in his book into design-pattern form à la the 1994 Design Patterns: Elements of Re-Usable Object-Oriented Software book. Note especially the Virtual Constructor section (which uses his Envelope/Letter pattern structure).
Also see the related answers here to the question Class factory in Python as well as the 2001 Dr. Dobb's article about implementing them with C++ Templates titled Abstract Factory, Template Style.
A class factory constructs instances of other classes. Typically, the classes they create share a common base class or interface, but derived classes are returned.
For example, you could have a class factory that took a database connection string and returned a class implementing IDbConnection such as SqlConnection (class and interface from .Net)
A class factory is a method which (based on some parameters, for example) returns a customised class (not an instance!).
The Wikipedia article gives a pretty good definition: http://en.wikipedia.org/wiki/Factory_pattern
But probably the most authoritative definition would be found in the Design Patterns book by Gamma et al. (commonly called the Gang of Four Book).
I felt that this explains it pretty well (for me, anyway). Class factories are used in the factory design pattern, I think.
Like other creational patterns, it [the factory design pattern] deals with the problem of creating objects (products) without specifying the exact class of object that will be created. The factory method design pattern handles this problem by defining a separate method for creating the objects, which subclasses can then override to specify the derived type of product that will be created. More generally, the term factory method is often used to refer to any method whose main purpose is creation of objects.
http://en.wikipedia.org/wiki/Factory_method_pattern
Apologies if you've already read this and found it to be insufficient.

Why do dynamic languages like Ruby and Python not have the concept of interfaces like in Java or C#?

As I develop more interest in dynamic languages like Ruby and Python, I am surprised to find that several basic concepts like interfaces, method overloading, and operator overloading appear to be missing, even though the claim is that these languages are 100% object oriented. Is it somehow built in under the covers, or do these languages just not need it? If the latter is true, are they 100% object oriented?
EDIT: Based on some answers I see that overloading is available in both Python and Ruby; is that the case in Ruby 1.8.6 and Python 2.5.2?
Dynamic languages use duck typing. Any code can call methods on any object that supports those methods, so the concept of interfaces is extraneous.
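A tiny sketch of what that means in Python (the classes are invented for illustration): neither class declares or implements anything, yet both work anywhere the expected method exists.
class Duck:
    def speak(self):
        return "quack"

class Robot:
    def speak(self):
        return "beep"

def announce(thing):
    # no interface or base class required: only the method matters
    return thing.speak().upper()

print(announce(Duck()))   # QUACK
print(announce(Robot()))  # BEEP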
Python does in fact support operator overloading (see "3.3. Special method names" in the language reference), as does Ruby.
Anyway, you seem to be focusing on aspects that are not essential to object oriented programming. The main focus is on concepts like encapsulation, inheritance, and polymorphism, which are 100% supported in Python and Ruby.
Thanks to late binding, they do not need it. In Java/C#, interfaces are used to declare that some class has certain methods and it is checked during compile time; in Python, whether a method exists is checked during runtime.
Method overriding in Python works:
>>> class A:
...     def foo(self):
...         return "A"
...
>>> class B(A):
...     def foo(self):
...         return "B"
...
>>> B().foo()
'B'
Are they object-oriented? I'd say yes. It's more of an approach thing rather than if any concrete language has feature X or feature Y.
I can only speak for python, but there have been proposals for interfaces as well as home-written interface examples in the past.
However, the way python works with objects dynamically tends to reduce the need for (and the benefit of) interfaces to some extent.
With a dynamic language, your type binding happens at runtime - interfaces are mostly used for compile time constraints on objects - if this happens at runtime, it eliminates some of the need for interfaces.
Name-based polymorphism:
"For those of you unfamiliar with Python, here's a quick intro to name-based polymorphism. Python objects have an internal dictionary that contains a string for every attribute and method. When you access an attribute or method in Python code, Python simply looks up the string in the dict. Therefore, if what you want is a class that works like a file, you don't need to inherit from file, you just create a class that has the file methods that are needed.
Python also defines a bunch of special methods that get called by the appropriate syntax. For example, a + b is equivalent to a.__add__(b). There are a few places in Python's internals where it directly manipulates built-in objects, but name-based polymorphism works as you expect about 98% of the time."
Python does provide operator overloading, e.g. you can define a method __add__ if you want to overload +.
You typically don't need to provide method overloading, since you can pass arbitrary parameters into a single method. In many cases, that single method can have a single body that works for all kinds of objects in the same way. If you want different code for different parameter types, you can inspect the type or use double dispatch.
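For instance, a minimal sketch (with a made-up Money class) showing __add__ overloading + and a single method branching on the argument's type instead of relying on separate compile-time overloads:
class Money:
    def __init__(self, cents):
        self.cents = cents

    def __add__(self, other):
        # one method handles several argument types instead of
        # separate overloads selected at compile time
        if isinstance(other, Money):
            return Money(self.cents + other.cents)
        if isinstance(other, int):
            return Money(self.cents + other)
        return NotImplemented

print((Money(150) + Money(50)).cents)  # 200
print((Money(150) + 25).cents)         # 175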
Interfaces are mostly unnecessary because of duck typing, as rossfabricant points out. A few remaining cases are covered in Python by ABCs (abstract base classes) or Zope interfaces.