Is polymorphism strictly a type-theory concept?

I am learning OOP concepts, which do not really have well-established definitions.
I have heard different things about polymorphism and cannot decide what is right.
Most people say that it is a type-theory concept, meaning that a function is able to accept parameters of multiple types, as long as those types have something in common.
Ad hoc polymorphism covers different overloads of the same function.
Parametric polymorphism covers generic functions.
Subtyping polymorphism means that if a function accepts a certain class as a parameter, it also accepts that class's subclasses. (Of course, only concrete, non-abstract subclasses can actually be instantiated and passed.) The sketch below illustrates all three.
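A sketch of all three kinds in Scala (the names are mine, purely for illustration):

def show(x: Int): String    = s"Int: $x"    // ad hoc: overloads of one name
def show(x: String): String = s"String: $x"

def firstOf[A](xs: List[A]): Option[A] = xs.headOption  // parametric: one generic implementation

class Animal
class Dog extends Animal
def pet(a: Animal): Unit = ()

pet(new Dog) // subtyping: a Dog is accepted where an Animal is expected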
There is a seemingly different definition. There are those who say polymorphism means that a function can have different implementations (morphs/forms).
In that sense...
- interface functions,
- abstract classes’ abstract functions,
- and virtual functions that can be overridden by the subtype
...are all considered polymorphic.
As I was told, polymorphism in this sense can be defined as having different results if the same function is called on different objects.
And adding to the confusion, someone said only virtual functions are polymorphic because they already have an implementation.
To me, the first presentation of polymorphism and the second seem completely different, but maybe they both fit one definition of polymorphism and I simply fail to see it.
So what is polymorphism in programming? Is it just a type-theory?
In this question I would like to refer to this related question:
https://stackoverflow.com/questions/25163683/polymorphism-and-interfaces-clarification#=
It raises almost the same problem, but I could not really make out the conclusion.

Yes and No.
Yes, in classic inheritance-based languages it works that way.
No, since in other languages a method call on an object might be resolved dynamically (e.g. by runtime code searching a list of objects held in a field, a so-called aggregate in COM terms).
In other words, the fact that the method exists in the object's type is not necessarily defined in type theory. At least not universally. The language might not even be typed.
For a statically typed, inheritance-based object model it is true, however. In other words, typing (subtyping/inheritance, the concept of virtual methods) is an implementation of polymorphism in languages with such an object model. But not all languages have one.
Some have dispatch polymorphism and can add methods at runtime (like Objective-C), or only find out at runtime whether the method exists at all (e.g. COM's IDispatch).
The classic test of polymorphism is "the duck quacks": you have a generic "animal" and call a method "makesound", and if a duck was assigned, it "quacks". So you call a method (pass a message, in old OO jargon) on a generic object, and you get the behaviour of the more specialized object assigned to it.
What constitutes a "generic" object depends on the language. In statically typed, inheritance-based languages the generic object must have the method declared, sometimes with special modifiers (virtual) to signal overridability.
In other languages the generic object can be the root object, and the runtime will figure out if it has a makesound method.
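A minimal Scala sketch of that test (the trait and class names are mine, following the answer's example):

trait Animal {
  def makeSound(): String
}

class Duck extends Animal {
  def makeSound(): String = "quack"
}

val animal: Animal = new Duck
animal.makeSound() // "quack": the behaviour of the more specialized object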

The object-functional impedance mismatch

In OOP it is good practice to talk to interfaces, not to implementations. So, e.g., you write something like this (by Seq I mean scala.collection.immutable.Seq):
// talk to the interface - good OOP practice
def doSomething[A](xs: Seq[A]) = ???
not something like the following:
// talk to the implementation - bad OOP practice
def doSomething[A](xs: List[A]) = ???
However, in pure functional programming languages, such as Haskell, you don't have subtype polymorphism; instead you use ad hoc polymorphism through type classes. So, for example, you have the list data type and a monadic instance for lists. You don't need to worry about programming to an interface/abstract class, because no such concept exists.
In hybrid languages, such as Scala, you have both type classes (encoded through a pattern, actually, not as first-class citizens as in Haskell, but I digress) and subtype polymorphism. In scalaz, cats and so on, you have monadic instances for concrete types, not for the abstract ones, of course.
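For reference, a minimal sketch of the type class pattern in Scala 2 (Functor and the instance names are illustrative, not taken from scalaz or cats):

trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object Functor {
  implicit val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }
}

// Works for any F with an instance in scope; the instance is for the
// concrete List, not for the abstract Seq:
def double[F[_]](xs: F[Int])(implicit F: Functor[F]): F[Int] = F.map(xs)(_ * 2)

double(List(1, 2, 3)) // List(2, 4, 6)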
Finally, the question: given this hybridism of Scala, do you still respect the OOP rule of talking to interfaces, or do you talk to concrete types in order to take advantage of functors, monads and so on directly, without having to convert to a concrete type whenever you need them? Put differently, is it still good practice in Scala to talk to interfaces, even if you want to embrace FP instead of OOP? If not, what if you chose to use List and, later on, realized that a Vector would have been a better choice?
P.S.: In my examples I used a simple method, but the same reasoning applies to user-defined types. E.g.:
case class Foo(bars: Seq[Bar], ...)
What I would attack here is your "concrete vs. interface" concept. Look at it this way: every type has an interface, in the general sense of the term "interface." A "concrete" type is just a limiting case.
So let's look at Haskell lists from this angle. What's the interface of a list? Well, lists are an algebraic data type, and all such data types have the same general form of interface and contract:
- You can construct instances of the type using its constructors, according to their arities and argument types;
- you can observe instances of the type by matching against their constructors, according to their arities and argument types;
- construction and observation are inverses: when you pattern match against a value, what you get out is exactly what was put into it.
If you look at it in these terms, I think the following rule works pretty well in either paradigm:
Choose types whose interfaces and contracts match exactly with your requirements.
- If their contracts are weaker than your requirements, they won't maintain invariants that you need;
- if their contracts are stronger than your requirements, you may unintentionally couple yourself to the "extra" details and limit your ability to change the program later on.
So you no longer ask whether a type is "concrete" or "abstract"—just whether it fits your requirements.
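A small sketch of that rule, assuming the Scala collections discussed above:

// Only ordered traversal is required, so Seq's contract already matches:
def render(xs: Seq[String]): String = xs.mkString(", ")

// Constant-time prepend and cons-cell structure are required, so List's
// stronger contract is itself the requirement:
def push(stack: List[Int], x: Int): List[Int] = x :: stack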
These are my two cents on this subject. In Haskell you have algebraic data types (ADTs). You have both lists (linked lists) and vectors (int-indexed arrays), but they don't share a common supertype: if your function takes a list, you cannot pass it a vector.
In Scala, being a hybrid OOP-FP language, you have subtype polymorphism too, so you may not care whether the client code passes a List or a Vector; just require a Seq (possibly immutable) and you're done.
I guess that to answer this question you have to ask yourself another one: "Do I want to embrace FP in toto?". If the answer is yes, then you shouldn't use Seq or any other abstract superclass in the OOP sense. Of course, the exception to this rule is the use of a trait/abstract class when defining ADTs in Scala. For example:
sealed trait Tree[+A]
case object Empty extends Tree[Nothing]
case class Node[A](value: A, left: Tree[A], right: Tree[A]) extends Tree[A]
In this case one would require Tree[A] as the type, of course, and then use, e.g., pattern matching to determine whether it's Empty or a Node[A].
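For instance, a small function over that Tree (size is an illustrative name):

def size[A](t: Tree[A]): Int = t match {
  case Empty                => 0
  case Node(_, left, right) => 1 + size(left) + size(right)
}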
I guess my feeling about this subject is confirmed by the red book (Functional Programming in Scala): there the authors never use Seq, but List, Vector and so on. Also, Haskellers don't care about these problems and use lists whenever they need linked-list semantics and vectors whenever they need int-indexed-array semantics.
If, on the other hand, you want to embrace OOP and use Scala as a better Java, then OK, you should follow the OOP best practice of talking to interfaces, not implementations.
If you're thinking: "I'd rather opt for mostly functional" then you should read Erik Meijer's The Curse of the Excluded Middle.

function currying vs an ordinary callback method

I'm trying to understand how the programming technique known as currying differs from an ordinary callback interface (such as the Observer/Observable interfaces in Java, or the classic Visitor design pattern).
I understand what currying is, I just don't understand why it's uniquely useful to the point that it requires its own terminology and language support.
Could someone explain a programming situation that is better solved by currying than by a callback method? What's the practical significance of the fact that currying uses a separate function for each argument?
[update:] To summarize the answers I got: currying comes part and parcel with the fact that functions are "first-class" citizens, i.e. objects that can be created and passed around like any other object reference. This makes it possible to return a function from a function, which is the essence of currying.
As for why currying is useful: it provides a syntax for concisely decorating function calls, so that derived functions can be created with minimal boilerplate. Whereas in Java you might create several overloaded or "wrapper" methods for each partial parameter set, which ultimately invoke a master method containing all the parameters, currying provides a lighter syntax that lets you generate these "function wrappers" as needed in your code.
Currying and callbacks are two completely different techniques.
Callbacks are essentially a synonym for "passing a function to a function" (i.e. a higher-order function that consumes a function); currying is a form of partial application, i.e. a function which isn't passed all of the parameters it expects returns a new function that expects only the remaining parameters.
Accordingly, they are not alternatives at all.
Currying is useful because it makes it much easier to concisely create functions that can be used, for example, as callbacks, or in a point-free program. It also means that you can, e.g., partially apply a function like map with your callback, obtaining a new function that applies that callback to every element of any list you care to pass it.
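A small Scala sketch of that point (log and warn are illustrative names): the curried function is partially applied, and the result is handed over as a callback.

def log(level: String)(message: String): Unit =
  println(s"[$level] $message")

val warn: String => Unit = log("WARN") _ // partially applied
List("disk full", "low memory").foreach(warn)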
Well, it comes down to language support.
In Java, for example, you can define all sorts of callback interfaces: one for parameterless methods, one for methods with one argument, one for methods with two arguments, and so forth.
But when functions are first-class citizens, one does not need this: single-argument functions will do the job, because functions can be returned. Hence, one important interface in all "functional Java" projects will be some interface of the form:
interface Fun<A,B> {
    public B apply(A a);
}
or something like it that covers this pattern.
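In Scala terms, where functions are first class out of the box, the same idea needs no family of interfaces; a sketch:

// A two-argument function as a chain of single-argument functions:
val add: Int => Int => Int = a => b => a + b

val addTwo: Int => Int = add(2) // returning a function replaces the N-ary interfaces
addTwo(3)                       // 5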

Practical uses for Structural Types?

Structural types are one of those "wow, cool!" features of Scala. However, for every example I can think of where they might help, implicit conversions and dynamic mixin composition often seem like better matches. What are some common uses for them and/or advice on when they are appropriate?
Aside from the rare case of classes which provide the same method but are neither related nor implement a common interface (for example, the close() method: Source, for one, does not extend Closeable), I find no use for structural types with their present restrictions. If they were more flexible, however, I could well write something like this:
def add[T: { def +(x: T): T }](a: T, b: T) = a + b
which would neatly handle numeric types. Every time I think structural types might help me with something, I hit that particular wall.
EDIT
However little use I find for structural types myself, the compiler uses them to handle anonymous classes. For example:
implicit def toTimes(count: Int) = new {
  def times(block: => Unit) = 1 to count foreach { _ => block }
}

5 times { println("This uses structural types!") }
The object resulting from (the implicit) toTimes(5) is of type { def times(block: => Unit) }, i.e., a structural type.
I don't know if Scala does that for every anonymous class -- perhaps it does. Alas, that is one reason why doing pimp-my-library that way is slow, as structural types use reflection to invoke their methods. Instead of an anonymous class, one should use a real class to avoid the performance issue.
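A sketch of that named-class alternative (Times is an illustrative name):

import scala.language.implicitConversions

class Times(count: Int) {
  def times(block: => Unit): Unit = (1 to count).foreach(_ => block)
}
implicit def toTimes(count: Int): Times = new Times(count)

5 times { println("No structural type involved!") } // ordinary virtual call, no reflection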
Structural types are very cool constructs in Scala. I've used them to represent multiple unrelated types that share an attribute upon which I want to perform a common operation without a new level of abstraction.
I have heard one argument against structural types from people who are strict about an application's architecture. They feel it is dangerous to apply a common operation across types without an associated trait or parent type, because you then leave open-ended the rule of which types the method should apply to. Daniel's close() example is spot on, but what if you have another type that requires different behavior? Someone who doesn't understand the architecture might use it and cause problems in the system.
I think structural types are one of those features that you don't need very often, but when you do need them, they help a lot. One area where structural types really shine is "retrofitting", e.g. when you need to glue together several pieces of software you have no source code for and which were not intended for reuse. But if you find yourself using structural types a lot, you're probably doing it wrong.
[Edit]
Of course implicits are often the way to go, but there are cases when you can't use them: imagine you have a mutable object you can modify with methods, but which hides important parts of its state, a kind of "black box". Then you have to work with this object somehow.
Another use case for structural types is when code relies on naming conventions without a common interface, e.g. in machine-generated code. The JDK contains such things as well, like the StringBuffer / StringBuilder pair (where the common interfaces Appendable and CharSequence are way too general). For instance:
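A sketch of that case, assuming only the append method the two classes share by convention:

import scala.language.reflectiveCalls

def addGreeting(sb: { def append(s: String): Any }): Unit =
  sb.append("hello")

addGreeting(new StringBuilder)          // scala.collection.mutable.StringBuilder
addGreeting(new java.lang.StringBuffer) // java.lang.StringBuffer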
Structural types give some of the benefits of dynamic languages to a statically typed language, specifically loose coupling. If you want a method foo() to call instance methods of a class Bar, you don't need an interface or base class common to both foo() and Bar. You can define a structural type that foo() accepts and of whose existence Bar has no clue. As long as Bar contains methods that match the structural type's signatures, foo() will be able to call them.
It's great because you can put foo() and Bar in distinct, completely unrelated libraries, that is, with no commonly referenced contract. This reduces linkage requirements and thus further contributes to loose coupling. For example:
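A sketch of that foo()/Bar relationship (greet is an illustrative method name):

import scala.language.reflectiveCalls

class Bar { // lives in "another library", references nothing from foo's side
  def greet(name: String): String = s"hello, $name"
}

def foo(b: { def greet(name: String): String }): String = // declares only the shape it needs
  b.greet("world")

foo(new Bar) // "hello, world", with no shared interface or base class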
In some situations, a structural type can be used as an alternative to the Adapter pattern, because it offers the following advantages:
Object identity is preserved (there is no separate object for the adapter instance, at least at the semantic level).
You don't need to instantiate an adapter - just pass a Bar instance to foo().
You don't need to implement wrapper methods - just declare the required signatures in the structural type.
The structural type doesn't need to know the actual instance class or interface, while the adapter must know Bar so it can call its methods. This way, a single structural type can be used for many actual types, whereas with an adapter it's necessary to code multiple classes, one for each actual type.
The only drawback of structural types compared to adapters is that a structural type can't be used to translate method signatures. So, when signatures don't match, you must use adapters containing some translation logic. I particularly don't like coding "intelligent" adapters because many times they are more than just adapters and increase complexity. If a class's client needs some additional method, I prefer to simply add such a method, since it usually doesn't affect the footprint.

Advantages of Scala's type system

I am exploring the Scala language. One claim I often hear is that Scala has a stronger type system than Java. By this I think what people mean is that:
scalac rejects certain buggy programs which javac will compile happily, only to cause a runtime error.
Certain invariants can be encoded in a Scala program such that the compiler won't let the programmer write code that violates the condition.
Am I right in thinking so?
The main advantage of the Scala type system is not so much being stronger as being far richer (see "The Scala Type System").
(Java can define some of them, and implement others, but Scala has them built-in).
See also The Myth Makers 1: Scala's "Type Types", commenting on Steve Yegge's blog post in which he "disses" Scala as "Frankenstein's Monster" because "there are type types, and type type types".
Value classes (useful for reasonably small data structures that have value semantics), used instead of primitive types (Int, Double, ...), with implicit conversions to "Rich" classes for additional methods.
Nonnullable type
Monad types
Trait types (and the mixin composition that comes with it)
Singleton object types (just define an 'object' and you have one),
Compound types (intersections of object types, to express that the type of an object is a subtype of several other types),
Function types (the (Type1, …) => ReturnType syntax),
Case classes (regular classes which export their constructor parameters and which provide a recursive decomposition mechanism via pattern matching),
Path-dependent types (languages that let you nest types provide ways to refer to those type paths),
Anonymous types (for defining anonymous functions),
Self types (can be used for instance in Trait),
Type aliases, along with:
package object (introduced in 2.8)
Generic types (like Java), with a type parameter annotation mechanism to control the subtyping behavior of generic types,
Covariant generic types: The annotation +T declares type T to be used only in covariant positions. Stack[T] is a subtype of Stack[S] if T is a subtype of S.
Contravariant generic types: -T would declare T to be used only in contravariant positions.
Bounded generic types (even though Java supports some part of it),
Higher-kinded types, which allow one to express more advanced type relationships than is possible with Java generics,
Abstract types (the alternative to generic types),
Existential types (used in Scala like the Java wildcard type),
Implicit types (see "The awesomeness of Scala is implicit"),
View bounded types, and
Structural types, for specifying a type by listing the characteristics of the desired type (duck typing).
The main safety problem with Java relates to variance. Basically, a programmer can write variance-incorrect code that results in exceptions being thrown at run time, while Scala will not allow it.
In fact, the very fact that Java's arrays are covariant is already a problem, since it allows incorrect code to compile. For instance, as exemplified by sepp2k:
String[] strings = {"foo"};
Object[] objects = strings;  // allowed: Java arrays are covariant
objects[0] = new Object();   // compiles, but throws ArrayStoreException at run time
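For contrast, a sketch of the Scala side: the unsound assignment is rejected at compile time because Scala's Array is invariant, and covariance must be declared explicitly (the +A of immutable Seq) where it is sound:

val strings: Array[String] = Array("foo")
// val anys: Array[Any] = strings // does not compile: Array[T] is invariant

val anySeq: Seq[Any] = Seq("foo") // fine: immutable Seq is covariant (+A)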
Then, of course, there are raw types in Java, which allow all sorts of things.
Also, though Scala has it as well, there's casting. The Java API is rich in type casts, and there's no idiom like Scala's case x: X => // x is now safely cast. Sure, one can use instanceof to accomplish that, but there's no incentive to do so. In fact, Scala's asInstanceOf is intentionally verbose.
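A sketch of that idiom (describe is an illustrative name): the match both tests the type and binds a correctly typed name, so no separate cast is needed.

def describe(x: Any): String = x match {
  case s: String => s"a string of length ${s.length}"
  case i: Int    => s"an integer: $i"
  case _         => "something else"
}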
These are the things that make Scala's type system stronger. It is also much richer, as VonC shows.

What compromises Scala made to run on JVM?

Scala is a wonderful language, but I wonder how it could be improved if it had its own runtime.
I.e. what design choices were made because the JVM was chosen?
The two most important compromises I know about are:
type erasure ("reflecting on types"): Scala has to manage Manifests to get around Java's erased compilation scheme (which is independent of the JVM and was kept for backward-compatibility reasons).
collections of primitive types, e.g. arrays:
Scala 2.8 introduced a new scheme for handling arrays. Instead of boxing/unboxing and other compiler magic, the scheme relies on implicit conversions and manifests to integrate arrays.
Those are the two main JVM limitations when it comes to managing generic types (with bounds): the JVM does not keep the exact type used in a generic object, and it has "primitive" types.
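A sketch of the erasure workaround, using ClassTag, the successor of Manifest in current Scala (pair is an illustrative name):

import scala.reflect.ClassTag

// Because of erasure the JVM cannot create an Array[T] for an unknown T,
// so the element class is threaded through an implicit ClassTag:
def pair[T: ClassTag](x: T): Array[T] = Array(x, x)

pair(42)   // Array(42, 42): a real Int array, not an Object array
pair("hi") // Array(hi, hi)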
But you could also consider:
Tail-call optimization is not yet fully supported by the JVM and is hard to do anyway (yet Scala 2.8 introduces the @tailrec annotation; see the sketch after the list below).
The UAP (Uniform Access Principle) needs to be emulated (it is not supported by Java), and will soon be completed for value holders (@proxy).
The whole mix-in mechanism also needs to be emulated.
More generally, most of the huge number of static types introduced by Scala need to be generated on top of Java's:
In order to cover as many possibilities as possible, Scala provides:
Conventional class types,
Value class types,
Nonnullable types,
Monad types,
Trait types,
Singleton object types (procedural modules, utility classes, etc.),
Compound types,
Functional types,
Case classes,
Path-dependent types,
Anonymous types,
Self types,
Type aliases,
Generic types,
Covariant generic types,
Contravariant generic types,
Bounded generic types,
Abstract types,
Existential types,
Implicit types,
Augmented types,
View bounded types, and
Structural types which allow a form of duck typing when all else fails
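A sketch of the @tailrec annotation mentioned in the list above (sum is an illustrative name): the compiler rewrites the self-recursive call into a loop and fails compilation if the call is not in tail position.

import scala.annotation.tailrec

@tailrec
def sum(xs: List[Int], acc: Int = 0): Int = xs match {
  case Nil    => acc
  case h :: t => sum(t, acc + h)
}

sum(List(1, 2, 3)) // 6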
This article is a discussion with Martin Odersky (Scala's creator) and includes the compromises that were made in Scala for compatibility with Java. The article mentions:
Static overloading of methods
Having both traits and classes
Inclusion of null pointers.
Less an issue with the runtime than a cultural hangover: universal equality, hashing, toString.
More deeply tied to the VM: strict by default evaluation, impure functions, exceptions.