Consider the following simple snippet of PureScript code:
module Main where

import Prelude
import Effect (Effect)
import Effect.Console (logShow)

a :: Int
a = 5
b :: Int
b = 7
c = a + b

main :: Effect Unit
main = do
  logShow c
The compiler successfully infers the type of c to be Int, and the program outputs the expected result:
12
However, it also produces this warning:
No type declaration was provided for the top-level declaration of c.
It is good practice to provide type declarations as a form of documentation.
The inferred type of c was:
Int
in value declaration c
I find this confusing, since I would expect the Int type for c to be safely inferred. Like it often says in the docs, "why derive types when the compiler can do it for you?" This seems like a textbook example of the simplest and most basic type inference.
Is this warning expected? Is there a standard configuration that would suppress it?
Does this warning indicate that every variable should in fact be explicitly typed?
In most cases, and certainly in the simplest cases, the types can be inferred unambiguously, and indeed, in those cases type signatures are not necessary at all. This is why simpler languages, such as F#, OCaml, or Elm, do not require type signatures.
But PureScript (and Haskell) has much more complicated cases too. Constrained types are one. Higher-rank types are another. It's a whole mess. Don't get me wrong, I love me some high-powered type system, but the sad truth is, with all of that stuff, type inference often produces ambiguous results, and sometimes doesn't work at all.
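To make that concrete, here is a minimal Haskell sketch (the same idea applies to PureScript) of inference failing outright: the constraints never determine the intermediate type, so the compiler cannot pick an instance.

-- Without the inner annotation, `read s` has type `Read a => a` and
-- nothing ever fixes `a`, so GHC reports an ambiguous type variable.
roundTrip :: String -> String
roundTrip s = show (read s :: Int)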
In practice, even when type inference does work, its results may be wildly different from what the developer intuitively expects, leading to very hard-to-debug issues. I mean, type errors in PureScript can be super vexing as it is, but imagine that happening across multiple top-level definitions, across multiple modules, perhaps even across multiple libraries. A nightmare!
So over the years a consensus has formed that overall it's better to have all the top-level definitions explicitly typed, even when it's super obvious. It makes the program much more understandable and puts constraints on the typechecker, providing it with "anchor points" of sorts, so it doesn't go wild.
But since it's not a hard requirement (most of the time), it's just a warning, not an error. You can ignore it if you wish, but do that at your own peril.
Now, another part of your question is whether every variable should be explicitly typed, and the answer is "no".
As a rule, every top-level binding should be explicitly typed (and that's where you get a warning), but local bindings (i.e. let and where) don't have to, unless you need to clarify something that the compiler can't infer.
Moreover, in PureScript (and modern Haskell), local bindings are actually "monomorphised": a fancy term basically meaning they can't be generic unless explicitly specified. This solves the problem of ambiguous type inference, while still working intuitively most of the time.
You can notice the difference with the following example:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
  s x = show x
On the second line, at s a <> s b, you get an error saying "Could not match type b with type a".
This happens because the where-bound function s has been monomorphised (meaning it's not generic), and its type has been inferred to be a -> String based on the s a usage. This means that the s b usage is ill-typed.
This can be fixed by giving s an explicit type signature:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
  s :: forall x. Show x => x -> String
  s x = show x
Now it's explicitly specified as generic, so it can be used with both a and b parameters.
Related
Here are my thoughts on the question. Can anyone confirm, deny, or elaborate?
I wrote:
Scala doesn’t unify covariant List[A] with a GLB ⊤ assigned to List[Int], bcz afaics in subtyping “biunification” the direction of assignment matters. Thus None must have type Option[⊥] (i.e. Option[Nothing]), ditto Nil type List[Nothing] which can’t accept assignment from an Option[Int] or List[Int] respectively. So the value restriction problem originates from directionless unification and global biunification was thought to be undecidable until the recent research linked above.
You may wish to view the context of the above comment.
ML’s value restriction disallows parametric polymorphism in cases (formerly thought to be rare, but maybe more prevalent) where it would otherwise be sound (i.e. type safe) to allow it, especially for partial application of curried functions (which is important in functional programming), because the alternative typing solutions create a stratification between functional and imperative programming and also break the encapsulation of modular abstract types. Haskell has an analogous dual, the monomorphisation restriction. OCaml relaxes the restriction in some cases. I elaborated on some of these details.
EDIT: my original intuition as expressed in the above quote (that the value restriction may be obviated by subtyping) is incorrect. The answers IMO elucidate the issue(s) well and I’m unable to decide which in the set containing Alexey’s, Andreas’, or mine, should be the selected best answer. IMO they’re all worthy.
As I explained before, the need for the value restriction -- or something similar -- arises when you combine parametric polymorphism with mutable references (or certain other effects). That is completely independent from whether the language has type inference or not or whether the language also allows subtyping or not. A canonical counter example like
let r : ∀A.Ref(List(A)) = ref [] in
  r := ["boo"];
  head(!r) + 1
is not affected by the ability to elide the type annotation nor by the ability to add a bound to the quantified type.
Consequently, when you add references to F<: then you need to impose a value restriction to not lose soundness. Similarly, MLsub cannot get rid of the value restriction. Scala enforces a value restriction through its syntax already, since there is no way to even write the definition of a value that would have polymorphic type.
It's much simpler than that. In Scala values can't have polymorphic types, only methods can. E.g. if you write
val id = x => x
its type isn't [A] A => A.
And if you take a polymorphic method e.g.
def id[A](x: A): A = x
and try to assign it to a value
val id1 = id
again the compiler will try (and in this case fail) to infer a specific A instead of creating a polymorphic value.
So the issue doesn't arise.
EDIT:
If you try to reproduce the http://mlton.org/ValueRestriction#_alternatives_to_the_value_restriction example in Scala, the problem you run into isn't the lack of let: val corresponds to it perfectly well. But you'd need something like
val f[A]: A => A = {
  var r: Option[A] = None
  { x => ... }
}
which is illegal. If you write def f[A]: A => A = ... it's legal but creates a new r on each call. In ML terms it would be like
val f: unit -> ('a -> 'a) =
  fn () =>
    let
      val r: 'a option ref = ref NONE
    in
      fn x =>
        let
          val y = !r
          val () = r := SOME x
        in
          case y of
            NONE => x
          | SOME y => y
        end
    end
val _ = f () 13
val _ = f () "foo"
which is allowed by the value restriction.
That is, Scala's rules are equivalent to only allowing lambdas as polymorphic values in ML instead of everything value restriction allows.
EDIT: this answer was incorrect before. I have completely rewritten the explanation below to gather my new understanding from the comments under the answers by Andreas and Alexey.
The edit history and the archived copies of this page at archive.is provide a record of my prior misunderstanding and discussion. Another reason I chose to edit rather than delete and write a new answer is to retain the comments on this answer. IMO, this answer is still needed because although Alexey answers the thread title correctly and most succinctly (and Andreas’ elaboration was the most helpful for me in gaining understanding), I think the layman reader may require a different, more holistic (yet hopefully still generative-essence) explanation in order to quickly gain some depth of understanding of the issue. Also, I think the other answers obscure how convoluted a holistic explanation is, and I want naive readers to have the option to taste it. The prior elucidations I’ve found don’t state all the details in English and instead (as mathematicians tend to do for efficiency) rely on the reader to discern the details from the nuances of the symbolic programming language examples and prerequisite domain knowledge (e.g. background facts about programming language design).
The value restriction arises where we have mutation of referenced1 type parametrised objects2. The type unsafety that would result without the value restriction is demonstrated in the following MLton code example:
val r: 'a option ref = ref NONE
val r1: string option ref = r
val r2: int option ref = r
val () = r1 := SOME "foo"
val v: int = valOf (!r2)
The NONE value (which is akin to null) contained in the object referenced by r can be assigned to a reference with any concrete type for the type parameter 'a, because r has the polymorphic type 'a. That would allow type unsafety because, as shown in the example above, the same object referenced by r, which has been assigned to both string option ref and int option ref, can be written (i.e. mutated) with a string value via the r1 reference and then read as an int value via the r2 reference. The value restriction generates a compiler error for the above example.
A typing complication arises to prevent3 the (re-)quantification (i.e. binding or determination) of the type parameter (aka type variable) of such a reference (and the object it points to) to a type which differs when reusing an instance of said reference that was previously quantified with a different type.
Such (arguably bewildering and convoluted) cases arise, for example, where successive function applications (aka calls) reuse the same instance of such a reference. IOW, cases where the type parameters (pertaining to the object) for a reference are (re-)quantified on each application of the function, yet the same instance of the reference (and the object it points to) is reused for each subsequent application (and quantification) of the function.
Tangentially, the occurrence of these is sometimes non-intuitive due to the lack of an explicit universal quantifier ∀ (since the implicit rank-1 prenex lexical-scope quantification can be dislodged from lexical evaluation order by constructions such as let or coroutines) and the arguably greater irregularity (as compared to Scala) of when unsafe cases may arise under ML’s value restriction:
Andreas wrote:
Unfortunately, ML does not usually make the quantifiers explicit in its syntax, only in its typing rules.
Reusing a referenced object is, for example, desired for let expressions, which, analogous to math notation, should create and evaluate the instantiation of the substitutions only once, even though they may be lexically substituted more than once within the in clause. So, for example, if a function application is evaluated within the in clause while the type parameters of the substitutions are re-quantified for each application (because the instantiation of the substitutions is only lexically within the function application), then type safety can be lost unless the applications are all forced to quantify the offending type parameters only once (i.e. the offending type parameter is disallowed from being polymorphic).
The value restriction is ML’s compromise to prevent all unsafe cases while also preventing some (formerly thought to be rare) safe cases, so as to simplify the type system. The value restriction is considered a better compromise, because the early (antiquated?) experience with more complicated typing approaches that didn’t restrict any or as many safe cases caused a bifurcation between imperative and pure functional (aka applicative) programming and leaked some of the encapsulation of abstract types in ML functor modules. I cited some sources and elaborated here.

Tangentially though, I’m pondering whether the early argument against bifurcation really stands up against the fact that the value restriction isn’t required at all for call-by-name (e.g. Haskell-esque lazy evaluation when also memoized by need), because conceptually partial applications don’t form closures on already evaluated state; and call-by-name is required for modular compositional reasoning and, when combined with purity, for modular (category theory and equational reasoning) control and composition of effects. The monomorphisation restriction argument against call-by-name is really about forcing type annotations, yet being explicit when optimal memoization (aka sharing) is required is arguably less onerous given said annotation is needed for modularity and readability anyway.

Call-by-value is a fine-tooth-comb level of control, so where we need that low-level control then perhaps we should accept the value restriction, because the rare cases that more complex typing would allow would be less useful in the imperative versus applicative setting. However, I don’t know if the two can be stratified/segregated in the same programming language in a smooth/elegant manner. Algebraic effects can be implemented in a CBV language such as ML and they may obviate the value restriction. IOW, if the value restriction is impinging on your code, possibly it’s because your programming language and libraries lack a suitable metamodel for handling effects.
Scala makes a syntactical restriction against all such references, which is a compromise that restricts, for example, the same cases and even more (cases that would be safe if not restricted) than ML’s value restriction, but is more regular in the sense that we’ll never be scratching our heads about an error message pertaining to the value restriction. In Scala, we’re never allowed to create such a reference. Thus in Scala, we can only express cases where a new instance of a reference is created when its type parameters are quantified. Note that OCaml relaxes the value restriction in some cases.
Note that afaik neither Scala nor ML enables declaring that a reference is immutable1, although the object it points to can be declared immutable with val. Note there’s no need for the restriction for references that can’t be mutated.
The reason that mutability of the reference type1 is required in order to make the complicated typing cases arise is that if we instantiate the reference (e.g. in the substitutions clause of let) with a non-parametrised object (i.e. not None or Nil4 but instead, for example, an Option[String] or List[Int]), then the reference won’t have a polymorphic type (pertaining to the object it points to) and thus the re-quantification issue never arises. So the problematic cases are due to instantiation with a polymorphic object, then subsequently assigning a newly quantified object (i.e. mutating the reference type) in a re-quantified context, followed by dereferencing (reading) from the (object pointed to by the) reference in a subsequent re-quantified context. As aforementioned, when the re-quantified type parameters conflict, the typing complication arises and the unsafe cases must be prevented/restricted.
Phew! If you understood that without reviewing linked examples, I’m impressed.
1 IMO, to employ the phrase “mutable references” instead of “mutability of the referenced object” and “mutability of the reference type” would be potentially more confusing, because our intention is to mutate the object’s value (and its type) which is referenced by the pointer, not to refer to mutability of the pointer itself. Some programming languages don’t even explicitly distinguish, in the case of primitive types, between disallowing mutation of the reference and disallowing mutation of the object it points to.
2 Wherein an object may even be a function, in a programming language that allows first-class functions.
3 To prevent a segmentation fault at runtime due to accessing (read or write of) the referenced object with a presumption about its statically (i.e. at compile-time) determined type which is not the type that the object actually has.
4 Which are NONE and [] respectively in ML.
I have a type class defined like this:
class Repo e ne | ne -> e, e -> ne where
  eTable :: Table (Relation e)
And when I try to compile it I get this:
* Couldn't match type `Database.Selda.Generic.Rel (GHC.Generics.Rep e0)'
     with `Database.Selda.Generic.Rel (GHC.Generics.Rep e)'
  Expected type: Table (Relation e)
    Actual type: Table (Relation e0)
  NB: `Database.Selda.Generic.Rel' is a type function, and may not be injective
  The type variable `e0' is ambiguous
* In the ambiguity check for `eTable'
  To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
  When checking the class method:
    eTable :: forall e ne. Repo e ne => Table (Relation e)
  In the class declaration for `Repo'
   |
41 |   eTable :: Table (Relation e)
   |   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I was expecting everything to be unambiguous since I've explicitly stated that e determines ne and vice versa.
However, if I define my class like this just for testing purposes, it compiles:
data Test a = Test a

class Repo e ne | ne -> e, e -> ne where
  eTable :: Maybe (Test e)
I'm not quite sure what it is about the Table and Relation types that causes this.
Test is injective, since it is a type constructor.
Relation is not injective, since it is a type family.
Hence the ambiguity.
Silly example:
type instance Relation Bool = ()
type instance Relation String = ()

instance Repo Bool Ne where
  eTable :: Table ()
  eTable = someEtable1

instance Repo String Ne where
  eTable :: Table ()
  eTable = someEtable2
Now, what is eTable :: Table () ? It could be the one from the first or the second instance. It is ambiguous since Relation is not injective.
The source of the ambiguity actually has nothing to do with ne not being used in the class (which you headed off by using functional dependencies).
The key part of the error message is:
Expected type: Table (Relation e)
Actual type: Table (Relation e0)
NB: `Database.Selda.Generic.Rel' is a type function, and may not be injective
Note that it's the e that it's having trouble matching up, and the NB message drawing your attention to the issue of type functions and injectivity (you really have to know what that all means for the message to be useful, but it has all the terms you need to look up to understand what's going on, so it's quite good as programming error messages go).
The issue it's complaining about is a key difference between type constructors and type families. Type constructors are always injective, while type functions in general (and type families in particular) do not have to be.
In standard Haskell with no extensions, the only way you can build compound type expressions was using type constructors, such as the left-hand side Test in your data Test a = Test a. I can apply Test (of kind * -> *) to a type like Int (of kind *) to get a type Test Int (of kind *). Type constructors are injective, which means for any two distinct types a and b, Test a is a distinct type from Test b1. This means that when type checking you can "run them backwards"; if I've got two types t1 and t2 that are each the result of applying Test, and I know that t1 and t2 are supposed to be equal, then I can "unapply" Test to get the argument types and check whether those are equal (or infer what one of them is if it was something I hadn't figured out yet and the other is known, or etc).
Type families (or any other form of type function that isn't known to be injective) don't provide us that guarantee. If I have two types t1 and t2 that are supposed to be equal, and they're both the result of applying some TypeFamily, there's no way to go from the resulting types to the types that TypeFamily was applied to. And in particular, there's no way to conclude from the fact that TypeFamily a and TypeFamily b are equal that a and b are equal as well; the type family might just happen to map two distinct types a and b to the same result (the definition of injectivity is that it doesn't do that). So if I knew which type a was but didn't know b, knowing that TypeFamily a and TypeFamily b are equal doesn't give me any more information about what type b should be.
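A minimal sketch of such a non-injective type family (made-up names, TypeFamilies extension assumed):

{-# LANGUAGE TypeFamilies #-}

type family F a
type instance F Int  = Bool
type instance F Char = Bool

-- F Int and F Char are both Bool, so from knowing that F x is Bool the
-- checker cannot recover x: it could be Int, Char, or any instance added later.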
Unfortunately, since standard Haskell only has type constructors, Haskell programmers get well-trained to just presume that the type checker can work backwards through compound types to connect up the components. We often don't even notice that the type checker needs to work backwards, we're so used to just looking at type expressions with similar structure and leaping to the obvious conclusions without working through all the steps that the type checker has to go through. But because type checking is based on working out the type of every expression both bottom-up2 and top-down3 and confirming that they are consistent, type checking expressions whose types involve type families can easily run into ambiguity problems where it looks "obviously" unambiguous to us humans.
In your Repo example, consider how the type checker will deal with a position where you use eTable, with (Int, Bool) for e, say. The top-down view will see that it's used in a context where some Table (Relation (Int, Bool)) is required. It'll compute what Relation (Int, Bool) evaluates to: say it's Set Int, so we need Table (Set Int). The bottom-up pass just says eTable can be Table (Relation e) for any e.
All of our experience with standard Haskell tells us that this is obvious: we just instantiate e to (Int, Bool), Relation (Int, Bool) evaluates to Set Int again, and we're done. But that's not actually how it works. Because Relation isn't injective there could be some other choice for e which gives us Set Int for Relation e: perhaps Int. But if we choose e to be (Int, Bool) or Int we need to look for two different Repo instances, which will have different implementations for eTable, even though their type is the same.
Even adding a type annotation every time you use eTable like eTable :: Table (Relation (Int, Bool)) doesn't help. The type annotation only adds extra information to the top-down view, which we often already have anyway. The type-checker is still stuck with the problem that there could be (whether or not there actually are) other choices of e than (Int, Bool) which lead to eTable matching that type annotation, so it doesn't know which instance to use. Any possible use of eTable will have this problem, so it gets reported as an error when you're defining the class. It's basically for the same reason you get problems when you have a class with some members whose types don't use all of the type variables in the class head; you have to consider "variable only used under a type family" as much the same as "variable isn't used at all".
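A minimal sketch of that analogy (a hypothetical class, for illustration): a method type that doesn't mention the class variable at all trips the same ambiguity check.

{-# LANGUAGE AllowAmbiguousTypes #-}  -- without this pragma, GHC rejects the class

class Phantom a where
  size :: Int  -- `a` appears nowhere in this type: any use of size is ambiguous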
You could address this by adding a Proxy argument to eTable so that there's something fixing the type variable e that the type checker can "run backwards". So eTable :: Proxy e -> Table (Relation e).
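A hedged sketch of that Proxy workaround, with stand-ins for the selda types from the question (Data.Proxy is in base; Person is a made-up instance type):

{-# LANGUAGE FunctionalDependencies, TypeFamilies #-}

import Data.Proxy

-- Stand-ins for the types in the question:
type family Relation e
newtype Table a = Table String

class Repo e ne | ne -> e, e -> ne where
  eTable :: Proxy e -> Table (Relation e)

-- At a use site the Proxy argument pins down e, so instance selection is unambiguous:
-- someTable = eTable (Proxy :: Proxy Person)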
Alternatively, with the TypeApplications extension you can now do as the error message suggests and turn on AllowAmbiguousTypes to get the class accepted, and then use things like eTable @(Int, Bool) to tell the compiler which choice for e you want. The reason this works where the type annotation eTable :: Table (Relation (Int, Bool)) doesn't is that the type annotation is extra information added to the context when the type checker is looking top-down, but the type application adds extra information when the type checker is looking bottom-up. Instead of "this expression is required to have a type that unifies with this type" it's "this polymorphic function is instantiated at this type".
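A sketch of that route (the fundep e -> ne lets the compiler determine ne once e is fixed; Person is again a made-up instance type):

{-# LANGUAGE AllowAmbiguousTypes, TypeApplications, FunctionalDependencies, TypeFamilies #-}

type family Relation e
newtype Table a = Table String

class Repo e ne | ne -> e, e -> ne where
  eTable :: Table (Relation e)

-- At a use site, instantiate e explicitly:
-- someTable = eTable @Person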
1 Type constructors are actually even more restricted than just injectivity; applying Test to any type a results in a type with known structure Test a, so the entire universe of Haskell types is straightforwardly mirrored in types of the form Test t. A more general injective type function could instead do more "rearranging", such as mapping Int to Bool so long as it didn't also map Bool to Bool.
2 From the type produced by combining the sub-parts of the expression
3 From the type required of the context in which it is used
(emphasis mine)
Covariant redefinition of fields and functions provides no problems, but
covariant redefinition of arguments does create a problem that illegal
types can be passed as arguments.
But, if redefining field and function types covariantly causes no problems, then
how come redefining an argument's type covariantly can cause trouble?
Covariant redefinition equals subtyping, right? And subtypes can take the place of their supertypes!
What's the catch?
The issue is not with covariance itself. (In particular, if it were contravariance, Design-by-Contract would be impossible, because argument types in the features of descendant classes would not necessarily have the features available in their parents. With covariance there is no such problem.)
What is problematic is the combination of covariance with polymorphism. E.g.
class A feature
  foo (a: A) do a.bar end -- (1)
  bar do end
end

class B inherit A redefine foo end feature
  foo (a: B) do a.qux end -- (2)
  qux do end
end
Now the following code would crash:
a: A; b: B
...
create b
a := b
a.foo (create {A})
Indeed, a.foo would call version (2) because a is attached to an object of type B. However, the argument passed to this feature will be of type A, and A has no feature qux, which leads to a run-time error. This kind of error is known as a CAT-call (Changing Availability or Type).
A solution to this issue is to avoid using covariance together with polymorphism, i.e. a call should not be polymorphic or there should be no covariant redeclaration of arguments. The work on this solution is in progress.
"a call should not be polymorphic or there should be no covariant redeclaration of arguments."
How can you tell?
Let's change your example a bit:
buzz (a_a: A)
  do
    a_a.foo (create {A})
  end
This looks innocent enough. But if buzz receives an argument of dynamic type B, you still get the CAT-call. The author of buzz might well be in a situation where the existence of B is unknown.
I think you need to drop the "a call should not be polymorphic or " bit of the advice. Simply prohibit covariant redeclaration of arguments.
I'm a beginner in Scala who is struggling with Scala syntax.
I got the line of code from https://www.tutorialspoint.com/scala/higher_order_functions.htm.
I know (x: A) is the argument of the layout function
(which means an argument x of type A).
But what is [A] between layout and (x: A)?
I've been googling Scala function syntax, but couldn't find it.
def layout[A](x: A) = "[" + x.toString() + "]"
It's a type parameter, meaning that the method is parameterised (some also say "generic"). Without it, the compiler would think that x: A denotes a variable of some concrete type A, and when it couldn't find any such type it would report a compile error.
This is a fairly common thing in statically typed languages; for example, Java has the same thing, only the syntax is <A>.
Type parameters also have rules about where they can occur, which involve the concepts of covariance and contravariance, denoted as [+A] and [-A]. Variance is definitely not in the scope of this question and is probably too much for you to handle right now, but it's an important concept, so I figured I'd mention it, at least to let you know what those plus and minus signs mean when you see them (and you will).
Also, type parameters can be upper or lower bounded, denoted as [A <: SomeType] and [A >: SomeType]. This means that the type parameter needs to be a subtype/supertype of another type, in this case the made-up type SomeType.
There are even more constructs that contribute extra information about the type (e.g. context bounds, denoted as [A : Foo], used for typeclass mechanism), but you'll learn about those later.
This means that the method takes a generic type as its parameter. Any type whose values support .toString can be passed to layout.
For example, you could pass both Int and String arguments to layout, since you can call .toString on both of them.
val i = 1
val s = "hi"
layout(i) // would give "[1]"
layout(s) // would give "[hi]"
Without the generic parameter, for this example you would have to write two definitions of layout: one that accepts an integer parameter, and one that accepts a string parameter. Even worse: every time you needed another type you'd have to write another definition that accepts it.
Take a look at this example here and you'll understand it better.
I also recommend taking a look at generic classes here.
A is a type parameter. Rather than being a concrete data type itself (e.g. case class A), it is generic, allowing any data type to be accepted by the function. So both of these will work:
layout(123f) (a Float) will output: "[123.0]"
layout("hello world") (a String) will output: "[hello world]"
Hence the function accepts whichever datatype is passed. Type parameters can also carry variance annotations, known as covariance and contravariance. Read more about them here!
I am trying to understand how to think about type classes in Haskell versus traits in Scala.
My understanding is that type classes are primarily important at compile time in Haskell and not at runtime anymore, on the other hand traits in Scala are important both at compile time and run time. I want to illustrate this idea with a simple example, and I want to know if this viewpoint of mine is correct or not.
First, let us consider type classes in Haskell:
Let's take a simple example: the type class Eq.
For example, Int and Char are both instances of Eq. So it is possible to create a polymorphic List that is also an instance of Eq and can either contain Ints or Chars but not both in the same List.
My question is : is this the only reason why type classes exist in Haskell?
The same question in other words:
Type classes enable to create polymorphic types ( in this example a polymorphic List) that support operations that are defined in a given type class ( in this example the operation == defined in the type class Eq) but that is their only reason for existence, according to my understanding. Is this understanding of mine correct?
Is there any other reason why type classes exist in ( standard ) Haskell?
Is there any other use case in which type classes are useful in standard Haskell ? I cannot seem to find any.
Since Haskell's lists are homogeneous, it is not possible to put a Char and an Int into the same list. So the usefulness of type classes, according to my understanding, is exhausted at compile time. Is this understanding of mine correct?
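For concreteness, a minimal Haskell sketch of the situation described above: the Eq constraint is resolved at compile time, the list stays homogeneous, and mixing element types is rejected.

allEqual :: Eq a => [a] -> Bool
allEqual xs = and (zipWith (==) xs (drop 1 xs))

-- allEqual [1, 1, 2 :: Int]   -- typechecks, evaluates to False
-- [(1 :: Int), 'a']           -- rejected: lists are homogeneous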
Now, let's consider the analogous List example in Scala:
Let's define a trait Eq with an equals method on it.
Now let's make Char and Int implement the trait Eq.
Now it is possible to create a List[Eq] in Scala that accepts both Chars and Ints in the same List. (Note that this, putting different types of elements into the same List, is not possible in Haskell, at least not in standard Haskell 98 without extensions!)
In the case of Haskell's List, the existence of type classes is important/useful only for type checking at compile time, according to my understanding.
In contrast, the existence of traits in Scala is important both at compile time for type checking and at run time for polymorphic dispatch on the actual runtime type of the object in the List when comparing two Lists for equality.
So, based on this simple example, I came to the conclusion that in Haskell type classes are primarily important/used at compilation time, in contrast, Scala's traits are important/used both at compile time and run time.
Is this conclusion of mine correct?
If not, why not ?
EDIT:
Scala code in response to n.m.'s comments:
case class MyInt(i: Int) {
  override def equals(b: Any) = i == b.asInstanceOf[MyInt].i
}

case class MyChar(c: Char) {
  override def equals(a: Any) = c == a.asInstanceOf[MyChar].c
}

object Test {
  def main(args: Array[String]) {
    val l1 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l2 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('b'))
    val l3 = List(MyInt(1), MyInt(2), MyChar('a'), MyChar('c'))
    println(l1 == l1)
    println(l1 == l3)
  }
}
This prints:
true
false
I will comment on the Haskell side.
Type classes bring restricted polymorphism to Haskell, wherein a type variable a can still be quantified universally, but ranges over only a subset of all the types -- namely, the types for which an instance of the type class is available.
Why is restricted polymorphism useful? A nice example is the equality operator
(==) :: ?????
What should its type be? Intuitively, it takes two values of the same type and returns a boolean, so:
(==) :: a -> a -> Bool -- (1)
But the typing above is not entirely honest, since it allows one to apply == to any type a, including function types!
(\(x :: Integer) -> x + x) == (\(x :: Integer) -> 2*x)
The above would pass type checking if (1) were the typing for (==), since both arguments are of the same type a = (Integer -> Integer). However, we cannot effectively compare two functions: well-known computability results tell us that there is no algorithm to do that in general.
So, what could we do to implement (==)?
Option 1: at run time, if a function (or any other value involving functions -- such as a list of functions) is found to be passed to (==), raise an exception. This is what e.g. ML does. Typed programs can now "go wrong", despite checking types at compile time.
Option 2: introduce a new kind of polymorphism, restricting a to the function-free types. For instance, we could have (==) :: forall-non-fun a. a -> a -> Bool so that comparing functions yields a type error. Haskell exploits type classes to obtain exactly that.
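In actual Haskell the restricted quantifier is spelled as a class constraint, and functions simply have no Eq instance, so comparing them fails at compile time (error text paraphrased):

(==) :: Eq a => a -> a -> Bool

-- (\x -> x + x) == (\x -> 2 * x)
--   => error: No instance for (Eq (Integer -> Integer))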
So, Haskell type classes allow one to type (==) "honestly", ensuring no error at run time, without being overly restrictive. Of course, the power of type classes goes far beyond that but, at least in my own view, their primary purpose is to allow restricted polymorphism, in a very general and flexible way. Indeed, with type classes the programmer can define their own restrictions on the universal type quantifications.
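A sketch of such a user-defined restriction (Pretty and render are made-up names):

class Pretty a where
  pretty :: a -> String

-- render is universally quantified, but only over the types that have
-- opted in via a Pretty instance:
render :: Pretty a => [a] -> String
render = unlines . map pretty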