In Eiffel, it is possible to specify a type with an 'anchored declaration'.
I wonder if the relevant invariants in the class also apply to an anchored declaration:
class C
feature
    f: INTEGER
        do
            ... Do something ...
        end
    g: like f
        do
            ... Do some other thing ...
        end
invariant
    0 < f
    -- 0 < g <-- Does this pop into existence?
end
I didn't see this written anywhere, and I think it's not the case. It would be convenient, sometimes, to avoid defining yet another type, but I think that would restrict the usefulness of anchored declarations in all other cases.
No, it is not possible to automagically create an invariant from an anchored declaration. In the line:
g: like f
the anchored type "like f" only replaces the type of "g". It is very similar to copying and pasting the type of "f" as the type of "g". In other words, in your example, what you wrote is almost the same as writing directly:
g: INTEGER
Consider the following simple snippet of PureScript code
a :: Int
a = 5
b :: Int
b = 7
c = a + b
main ∷ Effect Unit
main = do
  logShow c
The program successfully infers the type of c to be Int, and outputs the expected result:
12
However, it also produces this warning:
No type declaration was provided for the top-level declaration of c.
It is good practice to provide type declarations as a form of documentation.
The inferred type of c was:
Int
in value declaration c
I find this confusing, since I would expect the Int type for c to be safely inferred. As it often says in the docs, "why derive types when the compiler can do it for you?" This seems like a textbook example of the simplest and most basic type inference.
Is this warning expected? Is there a standard configuration that would suppress it?
Does this warning indicate that every variable should in fact be explicitly typed?
In most cases, and certainly in the simplest cases, the types can be inferred unambiguously, and indeed, in those cases type signatures are not necessary at all. This is why simpler languages, such as F#, OCaml, or Elm, do not require type signatures.
But PureScript (and Haskell) has much more complicated cases too. Constrained types are one. Higher-rank types are another. It's a whole mess. Don't get me wrong, I love me some high-power type system, but the sad truth is, type inference works ambiguously with all of that stuff a lot of the time, and sometimes doesn't work at all.
In practice, even when type inference does work, it turns out that its results may be wildly different from what the developer intuitively expects, leading to very hard-to-debug issues. I mean, type errors in PureScript can be super vexing as it is, but imagine that happening across multiple top-level definitions, across multiple modules, perhaps even across multiple libraries. A nightmare!
So over the years a consensus has formed that overall it's better to have all the top-level definitions explicitly typed, even when it's super obvious. It makes the program much more understandable and puts constraints on the typechecker, providing it with "anchor points" of sorts, so it doesn't go wild.
But since it's not a hard requirement (most of the time), it's just a warning, not an error. You can ignore it if you wish, but do that at your own peril. (In this particular case, simply adding the signature c :: Int above the definition makes the warning go away.)
Now, another part of your question is whether every variable should be explicitly typed, and the answer is "no".
As a rule, every top-level binding should be explicitly typed (and that's where you get a warning), but local bindings (i.e. let and where) don't have to, unless you need to clarify something that the compiler can't infer.
Moreover, in PureScript (and modern Haskell), local bindings are actually "monomorphised": a fancy term that basically means they can't be generic unless explicitly specified. This solves the problem of all the ambiguous type inference, while still working intuitively most of the time.
You can notice the difference with the following example:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
  s x = show x
On the second line, s a <> s b, you get an error saying "Could not match type b with type a".
This happens because the where-bound function s has been monomorphised (meaning it's not generic), and its type has been inferred to be a -> String based on the s a usage. This means that the s b usage is ill-typed.
This can be fixed by giving s an explicit type signature:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
  s :: forall x. Show x => x -> String
  s x = show x
Now s is explicitly specified as generic, so it can be used with both the a and b parameters.
Given the following class definition
class X[+T] {
  def get[U >: T](default: U): U
}
why is T in the method def get[U >: T](default: U) in a covariant position?
Given the following class definition
class X[-T] {
  def get[U <: T](default: U): U
}
why is T in the method def get[U <: T](default: U) in a contravariant position?
It is hard to answer a "Why?" question without more details, but I'll try. I assume that your question is really "why are the type restrictions on U not inverted?". The short answer: because this is type-safe and covers some cases that are not covered otherwise.
Your first example is probably inspired by Option[T] and its getOrElse method. Although I'm not sure why anybody needs getOrElse with U different from T, the logic for why the type restriction can only be U >: T seems obvious to me. Let's assume you have three classes: C, which inherits from B, which inherits from A, and you have an Option[B]. If your default value is already a B or a C, you don't need anything beyond U = T, and thus a simpler signature without the additional generic U would suffice. The only case where you can't pass a default value to the getOrElse method is when it has some type that is not a subtype of B (such as A). Let's extend this signature even more for a moment:
def getOrElse[U, R](default: U): R
How should the types U, T and R be related? Obviously R should be some common supertype of U and T, because it has to be able to hold both a T and a U. However, such a definition would be overkill. First of all, it is really weird to have a default value of a type that is not related to T at all. Secondly, even in such a strange case, you (and the compiler) can still compute a common supertype and use it as the new U (making R unnecessary). Thus you don't need R, but adding U does add some flexibility (at least theoretically). U still has to be some supertype of T, though, because it is also the return type.
So to sum up: adding U with U <: T would
either produce unsound code if you use U as the result type (a result of type U can't hold the contained T),
or, if you use T as the result type, it would not extend the applicability of this method, i.e. it would not allow any code that doesn't compile without U to compile with the additional U.
But adding U with U >: T allows some more code that is actually type-safe to compile, such as (yes, a contrived example, but as I said, I don't know any real-life ones):
val opt = Option[Int](1)
val orElse: Any = opt.getOrElse(List())
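To make the summary concrete, here is a minimal sketch, assuming a made-up covariant container Box whose getOrElse mirrors Option's real signature, of why only the lower bound type-checks:

class A; class B extends A; class C extends B

case class Box[+T](contents: T) {
  // The sound signature: U is a supertype of T, so returning contents (a T) as a U type-checks.
  // (The default is ignored here; only the types matter for this sketch.)
  def getOrElse[U >: T](default: U): U = contents

  // A hypothetical inverted signature would be rejected by the compiler:
  // def getOrElseBad[U <: T](default: U): U = contents
  // error: type mismatch; found: T, required: U (a T is not necessarily a U)
}

val box: Box[B] = Box(new C)
val a: A = box.getOrElse(new A) // U is inferred as A, the common supertype of B and A; type-safe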
(emphasis mine)
Covariant redefinition of fields and functions provides no problems, but
covariant redefinition of arguments does create a problem that illegal
types can be passed as arguments.
But, if redefining field and function types covariantly causes no problems, then
how come redefining an argument's type covariantly can cause trouble?
Covariant redefinition equals subtyping, right? And subtypes can take the place of their supertypes!
What's the catch?
The issue is not with covariance itself. (In particular, if it were contravariance, Design-by-Contract would be impossible, because argument types in the features of descendant classes would not necessarily have the features available in their parents. With covariance there is no such problem.)
What is problematic is the combination of covariance with polymorphism. E.g.:
class A feature
    foo (a: A) do a.bar end -- (1)
    bar do end
end

class B inherit A redefine foo end feature
    foo (a: B) do a.qux end -- (2)
    qux do end
end
Now the following code would crash:
a: A; b: B
...
create b
a := b
a.foo (create {A})
Indeed, a.foo would call version (2) because a is attached to an object of type B. However, the argument passed to this feature will be of type A, and A has no feature qux, which leads to a run-time error. This kind of error is known as a CAT call (Changing Availability or Type).
A solution to this issue is to avoid using covariance together with polymorphism, i.e. either a call should not be polymorphic, or there should be no covariant redeclaration of arguments. Work on this solution is in progress.
"a call should not be polymorphic or there should be no covariant redeclaration of arguments."
How can you tell?
Let's change your example a bit:
buzz (a_a: A)
    do
        a_a.foo (create {A})
    end
This looks innocent enough. But if buzz receives an argument of dynamic type B, you still get the CAT call. The author of buzz might well be in a situation where the existence of B is unknown.
I think you need to drop the "a call should not be polymorphic or " bit of the advice. Simply prohibit covariant redeclaration of arguments.
I'm a Scala beginner who is struggling with the syntax.
I got the line of code from https://www.tutorialspoint.com/scala/higher_order_functions.htm.
I know (x: A) is the argument of the layout function (which means argument x of type A).
But what is [A] between layout and (x: A)?
I've been googling Scala function syntax, but couldn't find it.
def layout[A](x: A) = "[" + x.toString() + "]"
It's a type parameter, meaning that the method is parameterised (some also say "generic"). Without it, the compiler would think that x: A denotes a parameter of some concrete type A, and when it couldn't find any such type it would report a compile error.
This is a fairly common thing in statically typed languages; for example, Java has the same thing, only the syntax is <A>.
Parameterized methods have rules about where the types can occur, which involve the concepts of covariance and contravariance, denoted as [+A] and [-A]. Variance is definitely not in the scope of this question and is probably too much for you to handle right now, but it's an important concept, so I figured I'd just mention it, at least to let you know what those plus and minus signs mean when you see them (and you will).
Also, type parameters can be upper or lower bounded, denoted as [A <: SomeType] and [A >: SomeType]. This means that the generic parameter needs to be a subtype/supertype of another type, in this case the made-up type SomeType.
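For instance, here is a minimal sketch of an upper bound, assuming a made-up helper totalLength and the standard CharSequence type:

// A must be a subtype of CharSequence, so .length is guaranteed to exist.
def totalLength[A <: CharSequence](xs: List[A]): Int = xs.map(_.length).sum

totalLength(List("foo", "bar")) // OK: String is a subtype of CharSequence
// totalLength(List(1, 2, 3))   // rejected: Int is not a subtype of CharSequence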
There are even more constructs that contribute extra information about the type (e.g. context bounds, denoted as [A : Foo], used for typeclass mechanism), but you'll learn about those later.
This means that the method is using a generic type as its parameter. Any type that has a definition for .toString can be passed to layout.
For example, you could pass both Int and String arguments to layout, since you can call .toString on both of them.
val i = 1
val s = "hi"
layout(i) // would give "[1]"
layout(s) // would give "[hi]"
Without the generic parameter, for this example you would have to write two definitions for layout: one that accepts an integer parameter, and one that accepts a string parameter. Even worse: every time you needed another type you'd have to write yet another definition that accepts it.
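To make that concrete, here is a minimal sketch of the non-generic alternative (layoutInt and layoutString are made-up names):

// One definition per type, instead of a single generic layout[A]:
def layoutInt(x: Int): String = "[" + x.toString + "]"
def layoutString(x: String): String = "[" + x + "]"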
Take a look at this example here and you'll understand it better.
I also recommend taking a look at generic classes here.
A is a type parameter. Rather than being a data type itself (e.g. case class A), it is generic, allowing any data type to be accepted by the function. So both of these will work:
layout(123f) [Float datatype] will output: "[123.0]"
layout("hello world") [String datatype] will output: "[hello world]"
Hence, whichever data type is passed, the function will accept it. Type parameters can also specify rules about how they vary; these are called covariance and contravariance. Read more about them here!
A function of mine is supposed to take from zero to five integer arguments and from zero to five string arguments. So I am considering defining it as a function of two lists: f(numbers: List[Int], strings: List[String]). But I think it would be good to constrain the lengths if possible, so that an IDE and/or the compiler can enforce it. Is this possible?
I think you're really asking a lot of the type system for this one... This is a classic task for dependently typed programming, a category to which Scala unfortunately does not belong.
You could look at Mark Harrah's type-level Naturals:
type _0 = Nat0
type _1 = Succ[_0]
type _2 = Succ[_1]
// ...
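For completeness, here is a minimal sketch of definitions under which those aliases would compile; the actual encoding in Harrah's library may differ:

// Hypothetical encoding: each natural number is a distinct type.
sealed trait Nat
sealed trait Nat0 extends Nat
sealed trait Succ[N <: Nat] extends Nat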
But if you go down this route, you'll have to build all your lists in such a way that the length-type is evident to the compiler. That means no recursion, no unbounded looping, etc. Also you'll have to come up with a way to encode "<" in the type system... since you can't do recursion I'm not sure how you'd do that. So, probably not worth it.
Maybe another way to approach the problem is to figure out where '0..5' comes from and constrain some other type based on that information?
As a last resort you could define special cases for the allowable sizes, kept separate so that you don't need a definition for every combination of sizes:
case class Small[+X](l: List[X])
def small(): Small[Nothing] = Small(List())
def small[A](a: A): Small[A] = Small(List(a))
def small[A](a1: A, a2: A): Small[A] = Small(List(a1,a2))
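A quick usage sketch of those overloads (the value names are made up):

val none = small()          // Small[Nothing]
val one  = small(42)        // Small[Int]
val two  = small("a", "b")  // Small[String]
// With overloads written out up to five parameters, a call with six arguments
// has no matching overload, so the compiler rejects it.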