Xtext: establishing cross-references for a generic type

This is what I am trying to parse:
type Number(); // define a type called "Number" with no member variables
type Point(Number x, Number y); // define a type called Point with member variables
// and something with generic types
type Pair[T, V](T first, V second);
// and even something cyclic:
type LinkedList[T](T payload, LinkedList[T] rest);
And here's my Xtext grammar that allows it:
TypeDecl returns SSType:
    'type' name=TypeName
    ('[' typeParams += TypeName (',' typeParams += TypeName)* ']')?
    '(' (args += Arg (',' args += Arg)*)? ')' ';'
;

TypeName returns SSTypeName:
    name=ID
;

Type:
    tn = [SSTypeName] ('[' typeParams += Type (',' typeParams += Type)* ']')?
;

Arg:
    type = Type argName = ID
;
Which works, but is way too liberal in what it accepts. If something is declared as a generic (e.g. the LinkedList in the above example) it should only be valid to use it as a generic (e.g. LinkedList[Number] and not LinkedList) and ideally the arity of the type arguments would be enforced.
And of course, if something is declared to not be a generic type (e.g. Number), it shouldn't be valid to give it type arguments.
Example of stuff it will wrongly accept:
type Wrong1(Number[Blah] a); // Number doesn't take type arguments
type Wrong2(Pair a); // Pair requires type arguments
type Wrong3(Pair[Number, Number, Number] a); // wrong arity
Any suggestions, comments, code or tips on how to do this properly would be much appreciated.

You should enforce the correct number of type arguments in your validator. It is often better to have a liberal scope provider and a strict validator, to provide better error messages.
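As a sketch of the idea, independent of the actual Xtext validator API (all class and method names below are hypothetical stand-ins for the classes Xtext would generate from the grammar above), the check reduces to comparing the declared type-parameter count with the type-argument count at each use site:

```scala
// Hypothetical model of the relevant AST pieces; the real classes are
// generated by Xtext from the grammar above.
case class SSTypeDecl(name: String, typeParams: List[String])
case class TypeUse(decl: SSTypeDecl, typeArgs: List[String])

// The validator-side check: a non-generic type must not be given type
// arguments, and a generic type must be given exactly as many type
// arguments as it declares type parameters.
def checkTypeUse(use: TypeUse): Option[String] = {
  val expected = use.decl.typeParams.size
  val actual   = use.typeArgs.size
  if (expected == actual) None
  else if (expected == 0)
    Some(s"${use.decl.name} is not generic but was given $actual type argument(s)")
  else
    Some(s"${use.decl.name} expects $expected type argument(s), but got $actual")
}

val number = SSTypeDecl("Number", Nil)
val pair   = SSTypeDecl("Pair", List("T", "V"))

println(checkTypeUse(TypeUse(pair, List("Number", "Number")))) // ok: correct arity
println(checkTypeUse(TypeUse(number, List("Blah"))))           // error: Wrong1
println(checkTypeUse(TypeUse(pair, Nil)))                      // error: Wrong2
```

In a real Xtext validator the same comparison would live in a `@Check` method that reports the message via the validation API instead of returning an `Option`.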

Related

Weird unwrapping of tuples in Swift

The picture below shows code executed in a playground that doesn't feel right, yet the Swift compiler is completely fine with it. For some reason the nesting depth of the tuple is reduced to one.
To be more expressive:
The first map call causes a compilation error. (Strange!)
The second call is okay. (Strange!)
Does anybody know whether this is a bug or a feature?
(Int, Int) is a tuple type (where the parentheses are part of the type), just as ((Int, Int)) is the same tuple type, but wrapped in an extra pair of (redundant) parentheses, just as (((Int, Int))) is the same tuple type as the two previous ones, but wrapped in two sets of (redundant) parentheses.
var a: (((Int, Int)))
print(type(of: a)) // (Int, Int)
Additional parentheses only come into effect if you start combining different types on a nested level, e.g.
var a: ((Int, Int), Int)
print(type(of: a)) // ((Int, Int), Int)
Now, why does the first map closure fail, whereas the second does not?
When using trailing closures, you may either
Use shorthand argument names ($0, ...), or
Use explicitly named (or explicitly name-omitted, '_') parameters.
Both your examples attempt to use named parameters, but only the second example follows the rules for using named parameters: namely, that the parameter names must be supplied (supplying parameter types and the closure return type is optional, but in some cases needed due to compiler type-inference limitations).
Study the following examples:
/* all good */
arr.map { (a) in
2*a /* ^-- explicitly name parameter */
}
// or
arr.map { a in
2*a /* ^-- explicitly name parameter */
}
/* additional parentheses:
error: unnamed parameters must be written with the empty name '_'
the closure now believes we want to supply parameter name as well as
an explicit type annotation for this parameter */
arr.map { ((a)) in
2*a /* ^-- compiler believes this is now a _type_, and prior to realizing
that it is not, prompts us with the error that we have not
supplied a parameter name/explicitly omitted one with '_' */
}
/* additional parentheses:
error: use of undeclared type 'a' */
arr.map { (_: (a)) in
1 /* ^-- type omitted: compiler now realizes that 'a' is not a type */
}
/* fixed: all good again! */
arr.map { (_: (Int)) in
1
}
In your first example, the attempted naming of the tuple elements (in the closure's `... in` part) wraps them in parentheses (just as in the errors shown above), which means Swift believes it to be a type (the type (x, y)); in that case the compiler requires including an internal parameter name or explicitly omitting one (using _). Only when you supply a parameter name will the compiler realize that x and y are not valid types.
In your second example you simply bind the closure's two tuple members directly to the internal parameter names x and y, choosing not to explicitly type-annotate these parameters (which is fine).

Why do I need to specify types for don't care inputs?

private val alwaysTrue = (_, _) => true
Causes the compiler to complain that it needs the types for both _'s. Why? They're just discarded anyway; shouldn't they be inferred to be scala.Any?
You must explicitly provide the parameter types for anonymous functions, unless something else expects a specific type, in which case the compiler will try to infer that type if it can. It's in SLS §6.23:
If the expected type of the anonymous function is of the form scala.Function_n[S1, …, Sn, R], the expected type of e is R and the type Ti of any of the parameters xi can be omitted, in which case Ti = Si is assumed. If the expected type of the anonymous function is some other type, all formal parameter types must be explicitly given, and the expected type of e is undefined.
I'm reading between the lines just a bit, but there is no expected type, so you must explicitly provide the types.
private val alwaysTrue = (_: Any, _: Any) => true
In cases where you have something like List(1, 2, 3).filter(_ > 3), the expected type is Int => Boolean, so it isn't necessary to provide the parameter type.
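Both variants side by side, as a minimal sketch:

```scala
// With an expected type, the parameter types are inferred (SLS 6.23):
// the annotation (Any, Any) => Boolean supplies S1 and S2.
val pred: (Any, Any) => Boolean = (_, _) => true

// Without an expected type, they must be written out explicitly:
val alwaysTrue = (_: Any, _: Any) => true

assert(pred(1, "x") && alwaysTrue(1, "x"))
```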

Scala priority of method call on implicit object

Let's say I have the following scala code:
case class Term(c: Char) {
  def unary_+ = Plus(this)
}
case class Plus(t: Term)
object Term {
  implicit def fromChar(c: Char) = Term(c)
}
Now I get this from the scala console:
scala> val p = +'a'
p: Int = 97
scala> val q:Plus = +'a'
<console>:16: error: type mismatch;
found : Int
required: Plus
val q:Plus = +'a'
^
Because unary '+' is already present on the Char type, the implicit conversion does not take place, I think. Is there a way to override the default behaviour and apply '+' on the converted Term instead of on the Char?
(BTW, the example is artificial and I'm not looking for alternative designs. The example is just here to illustrate the problem)
No, there is no way to override the default + operator, not even with an implicit conversion. When it encounters an operator (actually a method, as operators are just plain methods) that is not defined on the receiving object, the compiler will look for an implicit conversion to an object that does provide this operator. But if the operator is already defined on the target object, it will never look for any conversion; the original operator will always be called.
You should thus define a separate operator whose name will not conflict with any preexisting operator.
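A minimal sketch of that workaround (the method name toPlus is hypothetical; any name not already defined on Char would do):

```scala
import scala.language.implicitConversions

case class Term(c: Char) {
  def toPlus: Plus = Plus(this) // no member of Char is named toPlus
}
case class Plus(t: Term)
object Term {
  implicit def fromChar(c: Char): Term = Term(c)
}

import Term.fromChar

// Char has no member named 'toPlus', so the compiler searches for a view
// from Char and applies fromChar before the call:
val q: Plus = 'a'.toPlus
assert(q == Plus(Term('a')))
```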
UPDATE:
The precise rules that govern implicit conversions are defined in the Scala Language Specification:
Views are applied in three situations.

If an expression e is of type T, and T does not conform to the expression's expected type pt. In this case an implicit v is searched which is applicable to e and whose result type conforms to pt. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of T => pt. If such a view is found, the expression e is converted to v(e).

In a selection e.m with e of type T, if the selector m does not denote a member of T. In this case, a view v is searched which is applicable to e and whose result contains a member named m. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of T. If such a view is found, the selection e.m is converted to v(e).m.

In a selection e.m(args) with e of type T, if the selector m denotes some member(s) of T, but none of these members is applicable to the arguments args. In this case a view v is searched which is applicable to e and whose result contains a method m which is applicable to args. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of T. If such a view is found, the selection e.m is converted to v(e).m(args).
In other words, an implicit conversion occurs in 3 situations:
when an expression is of type T but is used in a context where the unrelated type T' is expected, an implicit conversion from T to T' (if any such conversion is in scope) is applied.
when trying to access a member that does not exist on an object, an implicit conversion from the object into another object that does have this member (if any such conversion is in scope) is applied.
when trying to call a method of an object with a parameter list that does not match any of the corresponding overloads, the compiler applies an implicit conversion from the object into another object that does have a method of this name with a compatible parameter list (if any such conversion is in scope).
Note for completeness that this actually applies to more than just methods (inner objects/vals with an apply method are eligible too). Note also that this is the case that Randall Schulz was talking about in his comment below.
So in your case, points (2) and (3) are relevant. Given that you want to define a method named unary_+, which already exists for type Char, case (2) won't kick in. And given that your version has the same parameter list as the built-in Char.unary_+ method (they are both parameterless), point (3) won't kick in either. So you definitely cannot define an implicit that will redefine unary_+.
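For contrast, a minimal sketch of case (1), where the expected type alone triggers the view (here no member is selected on the Char at all, so the built-in operators never get in the way):

```scala
import scala.language.implicitConversions

case class Term(c: Char)
object Term {
  // The companion object of the target type is in the implicit scope
  // of the conversion Char => Term, so no import is needed.
  implicit def fromChar(c: Char): Term = Term(c)
}

// Case (1): 'a' has type Char, but a Term is expected,
// so the view is applied and the expression becomes fromChar('a').
val t: Term = 'a'
assert(t == Term('a'))
```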

class A has one type parameter, but type B has one

Recently I stumbled across a strange (to me) compiler error message. Consider the following code:
trait Foo {
  type Res <: Foo
  type Bar[X <: Res]
}

class MyFoo extends Foo {
  override type Res = MyFoo
  override type Bar[X <: Res] = List[X]
}

type FOO[F <: Foo, R <: Foo, B[_ <: R]] =
  F { type Res = R; type Bar[X <: R] = B[X] }

def process[F <: Foo, R <: Foo, B[_ <: R]](f: FOO[F, R, B]) {}
Now, if I want to call the process method I have to explicitly write the type parameters:
process[MyFoo, MyFoo, List](new MyFoo) // fine
If I write:
process(new MyFoo)
or
process((new MyFoo): FOO[MyFoo, MyFoo, List])
I get the following error message:
inferred kinds of the type arguments (MyFoo,MyFoo,List[X]) do not conform to the expected kinds of the type parameters (type F,type R,type B). List[X]'s type parameters do not match type B's expected parameters: class List has one type parameter, but type B has one
Why isn't the compiler able to infer the types (although I explicitly stated them at the call site)? And what does class List has one type parameter, but type B has one mean? Something has one, but the other also has one, and that's why they don't fit together???
If we look at the Scala compiler, the sources can help us understand what the problem is. I have never contributed to the Scala compiler, but I found the sources very readable, and I have investigated this area before.
The class responsible for type inference is scala.tools.nsc.typechecker.Infer, which you can find simply by searching the Scala compiler sources for a part of your error message. You'll find the following fragment:
/** error if arguments not within bounds. */
def checkBounds(pos: Position, pre: Type, owner: Symbol,
                tparams: List[Symbol], targs: List[Type], prefix: String) = {
  //#M validate variances & bounds of targs wrt variances & bounds of tparams
  //#M TODO: better place to check this?
  //#M TODO: errors for getters & setters are reported separately
  val kindErrors = checkKindBounds(tparams, targs, pre, owner)
  if (!kindErrors.isEmpty) {
    error(pos,
      prefix + "kinds of the type arguments " + targs.mkString("(", ",", ")") +
      " do not conform to the expected kinds of the type parameters " + tparams.mkString("(", ",", ")") + tparams.head.locationString + "." +
      kindErrors.toList.mkString("\n", ", ", ""))
  }
So now the point is understanding why checkKindBounds(tparams, targs, pre, owner) returns those errors. If you go down the method-call chain, you will see that checkKindBounds calls another method:
val errors = checkKindBounds0(tparams, targs, pre, owner, true)
You'll see the problem is connected to checking the bounds of higher-kinded types, at line 5784, inside checkKindBoundsHK:
if (!sameLength(hkargs, hkparams)) {
  if (arg == AnyClass || arg == NothingClass) (Nil, Nil, Nil) // Any and Nothing are kind-overloaded
  else { error = true; (List((arg, param)), Nil, Nil) } // shortcut: always set error, whether explainTypesOrNot
}
The test is not passed; in my debugger it appears that:
hkargs$1 = {scala.collection.immutable.Nil$#2541}"List()"
arg$1 = {scala.tools.nsc.symtab.Symbols$ClassSymbol#2689}"class List"
param$1 = {scala.tools.nsc.symtab.Symbols$TypeSymbol#2557}"type B"
paramowner$1 = {scala.tools.nsc.symtab.Symbols$MethodSymbol#2692}"method process"
underHKParams$1 = {scala.collection.immutable.$colon$colon#2688}"List(type R)"
withHKArgs$1 = {scala.collection.immutable.Nil$#2541}"List()"
exceptionResult12 = null
hkparams$1 = {scala.collection.immutable.$colon$colon#2688}"List(type R)"
So it appears there is one higher-kinded param, type R, but no value is provided for it.
If you go back to checkKindBounds, you see that after the snippet:
val (arityMismatches, varianceMismatches, stricterBounds) = (
  // NOTE: *not* targ.typeSymbol, which normalizes
  checkKindBoundsHK(tparamsHO, targ.typeSymbolDirect, tparam, tparam.owner, tparam.typeParams, tparamsHO)
)
the arityMismatches contains the tuple (List, B). And now you can also see that the error message is wrong:
inferred kinds of the type arguments (MyFoo,MyFoo,List[X]) do not
conform to the expected kinds of the type parameters (type F,type
R,type B). List[X]'s type parameters do not match type B's expected
parameters: class List has one type parameter, but type B has ZERO
In fact if you put a breakpoint at line 5859 on the following call
checkKindBoundsHK(tparamsHO, targ.typeSymbolDirect, tparam, tparam.owner, tparam.typeParams, tparamsHO)
you can see that
tparam = {scala.tools.nsc.symtab.Symbols$TypeSymbol#2472}"type B"
targ = {scala.tools.nsc.symtab.Types$UniqueTypeRef#2473}"List[X]"
Conclusion:
For some reason, when dealing with complex higher-kinded types such as yours, the Scala compiler's inference is limited. I don't know where the limitation comes from; you may want to file a bug with the compiler team.
I only have a vague understanding of the exact workings of the type inferrer in Scala, so consider these ideas rather than definitive answers.
Type inference has problems with inferring more than one type at once.
You use an existential type in the definition of FOO, which translates to "there exists a type such that ..."; I am not sure whether this is compatible with the specific type given in MyFoo.

What is the point of multiple parameter clauses in function definitions in Scala?

I'm trying to understand the point of this language feature of multiple parameter clauses and why you would use it.
Eg, what's the difference between these two functions really?
class WTF {
  def TwoParamClauses(x: Int)(y: Int) = x + y
  def OneParamClause(x: Int, y: Int) = x + y
}
>> val underTest = new WTF
>> underTest.TwoParamClauses(1)(1) // result is '2'
>> underTest.OneParamClause(1,1) // result is '2'
There's something on this in the Scala specification at point 4.6. See if that makes any sense to you.
NB: the spec calls these 'parameter clauses', but I think some people may also call them 'parameter lists'.
Here are three practical uses of multiple parameter lists,
To aid type inference. This is especially useful when using higher order methods. Below, the type parameter A of g2 is inferred from the first parameter x, so the function arguments in the second parameter f can be elided,
def g1[A](x: A, f: A => A) = f(x)
g1(2, x => x) // error: missing parameter type for argument x
def g2[A](x: A)(f: A => A) = f(x)
g2(2) {x => x} // type is inferred; also, a nice syntax
For implicit parameters. Only the last parameter list can be marked implicit, and a single parameter list cannot mix implicit and non-implicit parameters. The definition of g3 below requires two parameter lists,
// analogous to a context bound: g3[A : Ordering](x: A)
def g3[A](x: A)(implicit ev: Ordering[A]) {}
To set default values based on previous parameters,
def g4(x: Int, y: Int = 2*x) {} // error: not found value x
def g5(x: Int)(y: Int = 2*x) {} // OK
TwoParamClauses involves two method invocations, while OneParamClause invokes the function only once. I think the term you are looking for is currying. Among its many use cases, it helps you break a computation down into small steps. This answer may convince you of the usefulness of currying.
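A small sketch of that breakdown, where applying only the first parameter list yields a reusable function:

```scala
def add(x: Int)(y: Int): Int = x + y

// Applying only the first parameter list gives an Int => Int.
// In Scala 2 the trailing underscore requests eta-expansion:
val add2: Int => Int = add(2) _

assert(add2(3) == 5)
assert(List(1, 2, 3).map(add2) == List(3, 4, 5))
```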
There is a difference between both versions concerning type inference. Consider
def f[A](a:A, aa:A) = null
f("x",1)
//Null = null
Here, the type A is bound to Any, which is a supertype of String and Int. But:
def g[A](a:A)(aa:A) = null
g("x")(1)
error: type mismatch;
found : Int(1)
required: java.lang.String
g("x")(1)
^
As you can see, the type checker only considers the first argument list, so A gets bound to String, and the Int value for aa in the second argument list is then a type error.
Multiple parameter lists can help Scala's type inference; for more details see Making the most of Scala's (extremely limited) type inference:
Type information does not flow from left to right within an argument list, only from left to right across argument lists. So, even though Scala knows the types of the first two arguments ... that information does not flow to our anonymous function.
...
Now that our binary function is in a separate argument list, any type information from the previous argument lists is used to fill in the types for our function ... therefore we don't need to annotate our lambda's parameters.
There are some cases where this distinction matters:
Multiple parameter lists allow you to write things like TwoParamClauses(2) _, which is an automatically generated function of type Int => Int that adds 2 to its argument. Of course you can define the same thing yourself using OneParamClause as well, but it will take more keystrokes.
If you have a function with implicit parameters that also has explicit parameters, the implicit parameters must all be in their own parameter clause (this may seem like an arbitrary restriction, but it is actually quite sensible).
Other than that, I think, the difference is stylistic.