Type checker phases - scala

I noticed that the type checker works in phases. Sometimes scalac returns only a few errors, which makes you think that you are almost there, but once you fix them all – boom – the next phase starts and you suddenly get a lot of errors that were not there before.
What are the different phases of the type checker?
Is there a way to know in which phase the type checker gave up on my code (other than recognizing the errors)?

As @Felix points out, this answer lists the compile phases:
$ scalac -version
Scala compiler version 2.11.6 -- Copyright 2002-2013, LAMP/EPFL
$ scalac -Xshow-phases
    phase name      id  description
    ----------      --  -----------
    parser           1  parse source into ASTs, perform simple desugaring
    namer            2  resolve names, attach symbols to named trees
    packageobjects   3  load package objects
    typer            4  the meat and potatoes: type the trees
    patmat           5  translate match expressions
    superaccessors   6  add super accessors in traits and nested classes
    extmethods       7  add extension methods for inline classes
    pickler          8  serialize symbol tables
    refchecks        9  reference/override checking, translate nested objects
    uncurry         10  uncurry, translate function values to anonymous classes
    tailcalls       11  replace tail calls by jumps
    specialize      12  @specialized-driven class and method specialization
    explicitouter   13  this refs to outer pointers
    erasure         14  erase types, add interfaces for traits
    posterasure     15  clean up erased inline classes
    lazyvals        16  allocate bitmaps, translate lazy vals into lazified defs
    lambdalift      17  move nested functions to top level
    constructors    18  move field definitions into constructors
    flatten         19  eliminate inner classes
    mixin           20  mixin composition
    cleanup         21  platform-specific cleanups, generate reflective calls
    delambdafy      22  remove lambdas
    icode           23  generate portable intermediate code
    jvm             24  generate JVM bytecode
    terminal        25  the last phase during a compilation run
Is there a way to know in which phase the type checker gave up on my code (other than recognizing the errors)?
If you add the -verbose flag to scalac, it will print the name of each phase and how long it took once that phase completes. You can then infer which phase failed.
I don't think that scalac exposes different phases of typing, only the compile phases. The typer is listed as a single compile phase.

The compiler option for this is -Yissue-debug. It outputs a stack trace in 2.10 when an error is issued.
The code supporting it was removed in 2.11 during the refactor of reporting, but the option is still valid. (I restored it at some point because, in fact, it's the quickest way to see what is emitting an error; but apparently that PR died on the vine and disappeared. Probably a victim of push -f.)
In 2.12, you can supply a custom reporter that does just about the same thing. They claim that they will augment the reporter API with access to context, so you could perhaps directly query the current phase, inspect trees, and so on.
Here's an example of the situation you describe:
class C { def f: Int ; def g: Int = "" ; def h = "\000" }
There are three errors, but only one is reported at a time because they are emitted in different compiler phases.
To clarify the question, various phases besides "typer" can create and typecheck trees, and additionally can enforce well-typedness even after trees are typed. (There are also other kinds of errors, such as "unable to write the output file.")
For C, the parser emits an error (under -Xfuture) for the deprecated octal syntax, the typer reports the type mismatch in g, and the grab bag refchecks phase checks the declared but undefined (empty) member f. You would normally wind up fixing one error at a time. If the parser error is emitted as a warning, then the warning would be suppressed until the errors were fixed, so that would be the last to pop up instead of the first.
Here is a sample reporter that tries to do more than output huge stack traces.
package myrep
import scala.tools.nsc.Settings
import scala.tools.nsc.reporters.ConsoleReporter
import scala.reflect.internal.util._
class DebugReporter(ss: Settings) extends ConsoleReporter(ss) {
  override def warning(pos: Position, msg: String) = debug {
    super.warning(pos, msg)
  }
  override def error(pos: Position, msg: String) = debug {
    super.error(pos, msg)
  }
  // let it ride
  override def hasErrors = false
  private def debug(body: => Unit): Unit = {
    val pkgs = Set("nsc.ast.parser", "nsc.typechecker", "nsc.transform")
    def compilerPackages(e: StackTraceElement): Boolean = pkgs exists (e.getClassName contains _)
    def classname(e: StackTraceElement): String = (e.getClassName split """\.""").last
    if (ss.Yissuedebug) echo {
      ((new Throwable).getStackTrace filter compilerPackages map classname).distinct mkString ("Issued from: ", ",", "\n")
    }
    body
  }
}
It lies about having no errors so that the compiler won't abort early.
It would be invoked this way, with the reporter class on the "tool class path":
scalacm -toolcp repdir -Xreporter myrep.DebugReporter -Yissue-debug -deprecation errs.scala
where
$ scalacm -version
Scala compiler version 2.12.0-M2 -- Copyright 2002-2013, LAMP/EPFL
Sample output:
Issued from: Scanners$UnitScanner,Scanners$Scanner,Parsers$Parser,Parsers$Parser$$anonfun$templateStat$1,Parsers$Parser$$anonfun$topStat$1,Parsers$SourceFileParser,Parsers$UnitParser,SyntaxAnalyzer,SyntaxAnalyzer$ParserPhase
errs.scala:4: warning: Octal escape literals are deprecated, use \u0000 instead.
class C { def f: Int ; def g: Int = "" ; def h = "\000" }
^
Issued from: Contexts$ImmediateReporter,Contexts$ContextReporter,Contexts$Context,ContextErrors$ErrorUtils$,ContextErrors$TyperContextErrors$TyperErrorGen$,Typers$Typer,Analyzer$typerFactory$$anon$3
errs.scala:4: error: type mismatch;
found : String("")
required: Int
class C { def f: Int ; def g: Int = "" ; def h = "\000" }
^
Issued from: RefChecks$RefCheckTransformer,Transform$Phase
errs.scala:4: error: class C needs to be abstract, since method f is not defined
class C { def f: Int ; def g: Int = "" ; def h = "\000" }
^
one warning found
two errors found

Related

Conditional compilation with macros works for methods, but not for fields

There is a library X I'm working on, which depends on another library Y. To support multiple versions of Y, X publishes multiple artifacts named X_Y1.0, X_Y1.1, etc. This is done using multiple subprojects in SBT with version-specific source directories like src/main/scala-Y1.0 and src/main/scala-Y1.1.
So far, it worked well. One minor problem is that sometimes version-specific source directories are too much. Sometimes they require a lot of code duplication because it's syntactically impossible to extract just the tiny differences into separate files. Sometimes doing so introduces performance overhead or makes the code unreadable.
Trying to solve the issue, I've added macro annotations to selectively delete a part of the code. It works like this:
class MyClass {
  @UntilB1_0
  def f: Int = 1
  @SinceB1_1
  def f: Int = 2
}
However, it seems it only works for methods. When I try to use the macro on fields, compilation fails with an error saying "f is already defined as value f". Also, it doesn't work for classes and objects.
My suspicion is that macros are applied during compilation before resolving method overloads, but after basic checks like checking duplicate names.
Is there a way to make the macros work for fields, classes, and objects too?
Here's an example macro to demonstrate the issue.
import scala.annotation.{compileTimeOnly, StaticAnnotation}
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

@compileTimeOnly("enable macro paradise to expand macro annotations")
class Delete extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro DeleteMacro.impl
}

object DeleteMacro {
  def impl(c: blackbox.Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    c.Expr[Nothing](EmptyTree)
  }
}
When the annotation @Delete is used on methods, it works.
class MyClass {
  @Delete
  def f: Int = 1
  def f: Int = 2
}
// new MyClass().f == 2
However, it doesn't work for fields.
class MyClass {
  @Delete
  val f: Int = 1
  val f: Int = 2
}
// error: f is already defined as value f
First of all, good idea :)
It is strange (and quite uncontrollable) behaviour, and I think that what you want to do is difficult to achieve with macros.
To understand why your expansion doesn't work, I tried to print the trees after each scalac phase.
Your expansion does work; given this code:
class Foo {
  @Delete
  lazy val x: Int = 12
  val x: Int = 10
  @Delete
  def a: Int = 10
  def a: Int = 12
}
the code printed after typer is:
package it.unibo {
  class Foo extends scala.AnyRef {
    def <init>(): it.unibo.Foo = {
      Foo.super.<init>();
      ()
    };
    <empty>; // val removed
    private[this] val x: Int = 10;
    <stable> <accessor> def x: Int = Foo.this.x;
    <empty>; // def removed
    def a: Int = 12
  };
  ...
}
But, unfortunately, the error is thrown anyway. I'm going to explain why this happens.
In scalac, macros are expanded during the typer phase (so after the parser, namer, and packageobjects phases).
Here, different things happen, such as (as said here):
infers types,
checks whether types match,
searches for implicit arguments and adds them to trees,
does implicit conversions,
checks whether all type operations are allowed (for example type cannot be a subtype of itself),
resolves overloading,
type-checks parent references,
checks type violations,
searches for implicits ,
expands macros,
and creates additional methods for case classes (like apply or copy).
The essential problem here is that we cannot change this order: duplicate val definitions are checked (by the namer) before method overloading is resolved, and macro expansion happens after that check but before the method-overloading check. For this reason @Delete works with methods but doesn't work with vals.
To solve your problem, I think it is necessary to use a compiler plugin: there you can add a phase before the namer, so no error will be thrown. Building a compiler plugin is more difficult than writing macros, but I think it is the best option in your case.

Scala eta expression ambiguous reference doesn't list all overloaded methods

I'm using Scala 2.12.1. In the Interpreter I make an Int val:
scala> val someInt = 3
someInt: Int = 3
Then I tried to use the eta expansion and get the following error:
scala> someInt.== _
<console>:13: error: ambiguous reference to overloaded definition,
both method == in class Int of type (x: Char)Boolean
and method == in class Int of type (x: Byte)Boolean
match expected type ?
someInt.== _
^
I see in the scaladoc that the Int class has more than 2 overloaded methods.
Question: is there a particular reason that the error message shows only 2 overloaded methods as opposed to listing all of them?
By the way, to specify which method you want to use the syntax is this:
scala> someInt.== _ : (Double => Boolean)
res9: Double => Boolean = $$Lambda$1103/1350894905@65e21ce3
The choice of the two methods that are listed seems to be more or less arbitrary. For example, this snippet:
class A
class B
class C
class Foo {
  def foo(thisWontBeListed: C): Unit = {}
  def foo(thisWillBeListedSecond: B): Unit = {}
  def foo(thisWillBeListedFirst: A): Unit = {}
}
val x: Foo = new Foo
x.foo _
fails to compile with the error message:
error: ambiguous reference to overloaded definition,
both method foo in class Foo of type (thisWillBeListedFirst: this.A)Unit
and method foo in class Foo of type (thisWillBeListedSecond: this.B)Unit
match expected type ?
x.foo _
That is, it simply picks the last two methods that were added to the body of the class and lists them in the error message. Maybe those methods are stored in a List in reverse order, and the first two items are picked to compose the error message.
Why does it do that? Because it has been programmed to do so; I'd take that as a given fact.
What was the main reason why it was programmed to do exactly this and not something else? That would be a primarily opinion-based question, and probably nobody except the authors of the Scala compiler themselves could give a definitive answer to that. I can think of at least three good reasons why only two conflicting methods are listed:
It's faster: why search for all conflicts, if it is already clear that a particular line of code does not compile? There is simply no reason to waste any time enumerating all possible ways how a particular line of code could be wrong.
The full list of conflicting methods is usually not needed anyway: the error messages are already quite long, and can at times be somewhat cryptic. Why aggravate it by printing an entire wall of error messages for a single line?
Implementation is easier: whenever you write a language interpreter of some sort, you quickly notice that returning all errors is somewhat more difficult than returning just the first error. Maybe in this particular case, it was decided not to bother collecting all possible conflicts.
PS: The order of the methods in the source code of Int seems to be different, but I don't know exactly what this "source" code has to do with the actual Int implementation: it seems to be a generated file without any implementations; it's there just so that scaladoc has something to process, while the real implementation is elsewhere.
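The type-ascription workaround from the question can be restated as a small compilable sketch (the EtaDemo object and val names are illustrative):

```scala
object EtaDemo extends App {
  val someInt = 3
  // Ascribing the expected function type tells overload resolution
  // which == overload to eta-expand.
  val eq: Double => Boolean = someInt.== _
  assert(eq(3.0))   // 3 == 3.0 is true
  assert(!eq(4.0))
}
```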

toList on Range with suffix notation causes type mismatch

I am just starting with Scala, and trying out some things on Range and List, I get something very strange with a very simple snippet. I use sublime to edit and execute these snippets:
val a = 1 to 10
println(a)
yields
Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
while
val a = 1 to 10
val b = a toList
println(a)
gives me the error:
/home/olivier/Dropbox/Projects/ProjectEuler/misc/scala/ch05_ex02.scala:5: error: type mismatch;
found : Unit
required: Int
println(a)
^
one error found
In the REPL, on the contrary, I do not get any error. Scala version is 2.9.2
This is caused by the way the compiler parses Suffix Notation (for methods of arity 0). It will try to parse it as Infix Notation (if possible). This causes the compiler to parse your code like this:
val a = 1 to 10
val b = a toList println(a)
Or specifically the latter line with dot notation:
val b = a.toList.apply(println(a))
A List instance has an apply method that takes an Int index, and println returns Unit. That's the reason for this specific error message.
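To make the parse concrete: a List's apply is positional indexing with an Int argument, which is exactly the Int the error message demands (the ApplyDemo name is illustrative):

```scala
object ApplyDemo extends App {
  val a = 1 to 10
  val b = a.toList        // the safe, dot-notation version
  assert(b.apply(0) == 1) // xs.apply(n) is the same as xs(n)
  assert(b(9) == 10)
}
```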
This style is frowned upon as specified in the Scala Documentation:
Suffix Notation
Scala allows methods of arity-0 to be invoked using suffix notation:
names.toList
// is the same as
names toList // Unsafe, don't use!
This style is unsafe, and should not be used. Since semicolons are optional, the compiler will attempt to treat it as an infix method if it can, potentially taking a term from the next line.
names toList
val answer = 42 // will not compile!
This may result in unexpected compile errors at best, and happily compiled faulty code at worst. Although the syntax is used by some DSLs, it should be considered deprecated, and avoided.
As of Scala 2.10, using suffix operator notation will result in a compiler warning.
As recommended, use the dot notation:
val b = a.toList
Or if you really want to, add a semicolon to denote the end of line:
val b = a toList;
Note the latter will emit a compiler warning, as stated in the docs:
[warn] postfix operator toList should be enabled
[warn] by making the implicit value scala.language.postfixOps visible.
[warn] This can be achieved by adding the import clause 'import scala.language.postfixOps'
[warn] or by setting the compiler option -language:postfixOps.
[warn] See the Scaladoc for value scala.language.postfixOps for a discussion
[warn] why the feature should be explicitly enabled.
[warn] val b = a toList;
[warn] ^
[warn] one warning found
In the REPL, on the contrary, I do not get any error.
Because the REPL executes on a line by line basis. As the toList expression isn't succeeded by the println expression, it compiles. If you enter paste mode (:paste) and copy it as a block of code, you'll see the same behavior.
More info can be found in this Scala user-group question
Use -Xprint:parser,typer to see how the code parsed and what types are inferred. The answer explains the interaction of postfix and infix parsing; but the error makes you go, "But toList doesn't even take an Int."
$ scala -Xprint:parser,typer
scala> :pa
// Entering paste mode (ctrl-D to finish)
1 to 10 toList
println("hi")
// Exiting paste mode, now interpreting.
[[syntax trees at end of parser]] // <console>
package $line3 {
object $read extends scala.AnyRef {
// ...
val res0 = 1.to(10).toList(println("hi"))
// ...
[[syntax trees at end of typer]] // <console>
package $line3 {
object $read extends scala.AnyRef {
//...
private[this] val <res0: error>: <error> = scala.this.Predef.intWrapper(1).to(10).toList.apply(println("hi"));
//...

Scala singleton factories and class constants

OK, in the question about 'Class Variables as constants', I get the fact that the constants are not available until after the 'official' constructor has been run (i.e. until you have an instance). BUT, what if I need the companion singleton to make calls on the class:
object thing {
  val someConst = 42
  def apply(x: Int) = new thing(x)
}
class thing(x: Int) {
  import thing.someConst
  val field = x * someConst
  override def toString = "val: " + field
}
If I create companion object first, the 'new thing(x)' (in the companion) causes an error. However, if I define the class first, the 'x * someConst' (in the class definition) causes an error.
I also tried placing the class definition inside the singleton.
object thing {
  var someConst = 42
  def apply(x: Int) = new thing(x)
  class thing(x: Int) {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
However, doing this gives me a 'thing.thing' type object
val t = thing(2)
results in
t: thing.thing = val: 84
The only useful solution I've come up with is to create an abstract class, a companion and an inner class (which extends the abstract class):
abstract class thing
object thing {
  val someConst = 42
  def apply(x: Int) = new privThing(x)
  class privThing(x: Int) extends thing {
    val field = x * someConst
    override def toString = "val: " + field
  }
}
val t1 = thing(2)
val tArr: Array[thing] = Array(t1)
OK, 't1' still has type of 'thing.privThing', but it can now be treated as a 'thing'.
However, it's still not an elegant solution, can anyone tell me a better way to do this?
PS. I should mention, I'm using Scala 2.8.1 on Windows 7
First, the error you're seeing (you didn't tell me what it is) isn't a runtime error. The thing constructor isn't called when the thing singleton is initialized -- it's called later when you call thing.apply, so there's no circular reference at runtime.
Second, you do have a circular reference at compile time, but that doesn't cause a problem when you're compiling a scala file that you've saved on disk -- the compiler can even resolve circular references between different files. (I tested. I put your original code in a file and compiled it, and it worked fine.)
Your real problem comes from trying to run this code in the Scala REPL. Here's what the REPL does and why this is a problem in the REPL. You're entering object thing and as soon as you finish, the REPL tries to compile it, because it's reached the end of a coherent chunk of code. (Semicolon inference was able to infer a semicolon at the end of the object, and that meant the compiler could get to work on that chunk of code.) But since you haven't defined class thing it can't compile it. You have the same problem when you reverse the definitions of class thing and object thing.
The solution is to nest both class thing and object thing inside some outer object. This will defer compilation until that outer object is complete, at which point the compiler will see the definitions of class thing and object thing at the same time. You can run import thingwrapper._ right after that to make class thing and object thing available in global scope for the REPL. When you're ready to integrate your code into a file somewhere, just ditch the outer class thingwrapper.
object thingwrapper {
  // you only need a wrapper object in the REPL
  object thing {
    val someConst = 42
    def apply(x: Int) = new thing(x)
  }
  class thing(x: Int) {
    import thing.someConst
    val field = x * someConst
    override def toString = "val: " + field
  }
}
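With the wrapper pasted as one block, usage looks like this (a minimal sketch; the Demo object is illustrative, and the wrapper is repeated here only to keep the snippet self-contained):

```scala
object thingwrapper {
  object thing {
    val someConst = 42
    def apply(x: Int) = new thing(x)
  }
  class thing(x: Int) {
    import thing.someConst
    val field = x * someConst
    override def toString = "val: " + field
  }
}

object Demo extends App {
  import thingwrapper._   // both class thing and object thing are now in scope
  val t = thing(2)        // goes through thing.apply; no circularity at runtime
  assert(t.field == 84)   // 2 * 42
  assert(t.toString == "val: 84")
}
```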
Scala 2.12 or later could benefit from SIP-23, which just (August 2016) passed to the next iteration (considered a “good idea”, but still a work in progress):
Literal-based singleton types
Singleton types bridge the gap between the value level and the type level and hence allow the exploration in Scala of techniques which would typically only be available in languages with support for full-spectrum dependent types.
Scala’s type system can model constants (e.g. 42, "foo", classOf[String]).
These are inferred in cases like object O { final val x = 42 }. They are used to denote and propagate compile time constants (See 6.24 Constant Expressions and discussion of “constant value definition” in 4.1 Value Declarations and Definitions).
However, there is no surface syntax to express such types. This makes people who need them, create macros that would provide workarounds to do just that (e.g. shapeless).
This can be changed in a relatively simple way, as the whole machinery to enable this is already present in the scala compiler.
type _42 = 42.type
type Unt = ().type
type _1 = 1 // .type is optional for literals
final val x = 1
type one = x.type // … but mandatory for identifiers
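Note that the .type-on-literals syntax above was only a draft; as literal types eventually shipped (Scala 2.13), the .type suffix is written only on stable identifiers. A sketch under that released syntax:

```scala
object LiteralTypesDemo extends App {
  val x: 42 = 42       // a literal singleton type (Scala 2.13+)
  final val y = 1
  val z: y.type = y    // singleton type of a stable identifier
  assert(x == 42)
  assert(z == 1)
}
```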

What are the biggest differences between Scala 2.8 and Scala 2.7?

I've written a rather large program in Scala 2.7.5, and now I'm looking forward to version 2.8. But I'm curious about how this big leap in the evolution of Scala will affect me.
What will be the biggest differences between these two versions of Scala? And perhaps most importantly:
Will I need to rewrite anything?
Do I want to rewrite anything just to take advantage of some cool new feature?
What exactly are the new features of Scala 2.8 in general?
Taking the Leap
When you migrate, the compiler can provide you with some safety nets.
Compile your old code against 2.7.7 with -deprecation, and follow the recommendations from all deprecation warnings.
Update your code to use un-nested packages. This can be done mechanically by repeatedly running this regular-expression search and replace.
s/^(package com.example.project.*)\.(\w+)/$1\npackage $2/g
Compile with 2.8.0 compiler, using paranoid command line options -deprecation -Xmigration -Xcheckinit -Xstrict-warnings -Xwarninit
If you receive the error could not find implicit value for evidence parameter of type scala.reflect.ClassManifest[T], you need to add an implicit parameter (or equivalently, a context bound) on a type parameter.
Before:
scala> def listToArray[T](ls: List[T]): Array[T] = ls.toArray
<console>:5: error: could not find implicit value for evidence parameter of type scala.reflect.ClassManifest[T]
       def listToArray[T](ls: List[T]): Array[T] = ls.toArray
                                                      ^
After:
scala> def listToArray[T: Manifest](ls: List[T]): Array[T] = ls.toArray
listToArray: [T](ls: List[T])(implicit evidence$1: Manifest[T])Array[T]
scala> def listToArray[T](ls: List[T])(implicit m: Manifest[T]): Array[T] = ls.toArray
listToArray: [T](ls: List[T])(implicit m: Manifest[T])Array[T]
Any method that calls listToArray, and itself takes T as a type parameter, must also accept the Manifest as an implicit parameter. See the Arrays SID for details.
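On later Scala versions the same fix is usually spelled with a ClassTag context bound rather than Manifest; a minimal sketch (the ArrayDemo name is illustrative):

```scala
import scala.reflect.ClassTag

object ArrayDemo extends App {
  // The ClassTag evidence lets toArray construct an Array[T] despite erasure.
  def listToArray[T: ClassTag](ls: List[T]): Array[T] = ls.toArray

  assert(listToArray(List(1, 2, 3)).sameElements(Array(1, 2, 3)))
  assert(listToArray(List("a", "b")).length == 2)
}
```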
Before too long, you'll encounter an error like this:
scala> collection.Map(1 -> 2): Map[Int, Int]
<console>:6: error: type mismatch;
found : scala.collection.Map[Int,Int]
required: Map[Int,Int]
collection.Map(1 -> 2): Map[Int, Int]
^
You need to understand that the type Map is an alias in Predef for collection.immutable.Map.
object Predef {
  type Map[A, B] = collection.immutable.Map[A, B]
  val Map = collection.immutable.Map
}
There are three types named Map -- a read-only interface: collection.Map, an immutable implementation: collection.immutable.Map, and a mutable implementation: collection.mutable.Map. Furthermore, the library defines the behaviour in a parallel set of traits MapLike, but this is really an implementation detail.
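To see the split concretely, here is a sketch (the MapDemo name is illustrative) showing that both implementations share the read-only collection.Map interface, while Predef's Map alias is the immutable one:

```scala
object MapDemo extends App {
  val im: Map[Int, String] = Map(1 -> "one")        // Predef.Map = immutable.Map
  val mu = scala.collection.mutable.Map(1 -> "one")
  // Both implementations satisfy the read-only interface:
  val views: List[scala.collection.Map[Int, String]] = List(im, mu)
  assert(views.forall(_.get(1).contains("one")))
  // The Predef alias means these are the same type:
  val same: scala.collection.immutable.Map[Int, String] = im
  assert(same(1) == "one")
}
```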
Reaping the Benefits
Replace some method overloading with named and default parameters.
Use the generated copy method of case classes.
scala> case class Foo(a: Int, b: String)
defined class Foo
scala> Foo(1, "a").copy(b = "b")
res1: Foo = Foo(1,b)
Generalize your method signatures from List to Seq or Iterable or Traversable. Because the collection classes form a clean hierarchy, you can accept a more general type.
Integrate with Java libraries using Annotations. You can now specify nested annotations, and have fine-grained control over whether annotations are targeted to fields or methods. This helps to use Spring or JPA with Scala code.
There are many other new features that can be safely ignored as you start migrating, for example @specialized and Continuations.
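For instance, generalizing a signature from List to Iterable lets every collection in the hierarchy flow in (SeqDemo and total are illustrative names):

```scala
object SeqDemo extends App {
  // Accepting Iterable[Int] instead of List[Int] broadens the callers.
  def total(xs: Iterable[Int]): Int = xs.sum

  assert(total(List(1, 2, 3)) == 6)
  assert(total(Vector(1, 2, 3)) == 6)
  assert(total(Set(1, 2, 3)) == 6)
}
```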
You can find here a preview of the new features in Scala 2.8 (April 2009), complemented by this more recent article (June 2009):
Named and Default Arguments
Nested Annotations
Package Objects
@specialized
Improved Collections (some rewrite might be needed here)
REPL will have command completion (more on that and other tricks in this article)
New Control Abstractions (continuation or break)
Enhancements (Swing wrapper, performances, ...)
"Rewriting code" is not an obligation (except for using some of the improved Collections), but some features like continuation (Wikipedia: an abstract representation of the control state, or the "rest of computation" or "rest of code to be executed") can give you some new ideas. A good introduction is found here, written by Daniel (who has also posted a much more detailed and specific answer in this thread).
Note: Scala on Netbeans seems to work with some 2.8 nightly-build (vs. the official page for 2.7.x)
VonC's answer is hard to improve on, so I won't even try to. I'll cover some other stuff not mentioned by him.
First, some deprecated stuff will go. If you have deprecation warnings in your code, it's likely it won't compile anymore.
Next, Scala's library is being expanded. Mostly, common little patterns such as catching exceptions into Either or Option, or converting an AnyRef into an Option with null mapped into None. These things can mostly pass unnoticed, but I'm getting tired of posting something on the blog and later having someone tell me it's already on Scala 2.8. Well, actually, I'm not getting tired of it, but, rather, and happily, used to it. And I'm not talking here about the Collections, which are getting a major revision.
Now, it would be nice if people posted actual examples of such library improvements as answers. I'd happily upvote all such answers.
REPL is not getting just command-completion. It's getting a lot of stuff, including the ability to examine the AST for an object, or the ability to insert break points into code that fall into REPL.
Also, Scala's compiler is being modified to be able to provide fast partial compilation to IDEs, which means we can expect them to become much more "knowledgeable" about Scala -- by querying the Scala compiler itself about the code.
One big change is likely to pass unnoticed by many, though it will decrease problems for library writers and users alike. Right now, if you write the following:
package com.mystuff.java.wrappers
import java.net._
You are importing not Java's net library, but com.mystuff.java's net library, as com, com.mystuff, com.mystuff.java, and com.mystuff.java.wrappers all come into scope, and java can be found inside com.mystuff. With Scala 2.8, only wrappers gets scoped. Since you sometimes want some of the rest to be in scope, an alternative package syntax is now allowed:
package com.mystuff.factories
package lightbulbs
which is equivalent to:
package com.mystuff.factories {
  package lightbulbs {
    ...
  }
}
And happens to get both factories and lightbulbs into scope.
Will I need to rewrite anything?
def takesArray(arr: Array[AnyRef]) {…}
def usesVarArgs(obs: AnyRef*) {
  takesArray(obs)
}
needs to become
def usesVarArgs(obs: AnyRef*) {
  takesArray(obs.toArray)
}
I had to visit the IRC channel for that one, but then realized I should have started here.
Here's a checklist from Eric Willigers, who has been using Scala since 2.2. Some of this stuff will seem dated to more recent users.
* Explicitly import from outer packages *
Suppose we have
package a
class B
Change
package a.c
class D extends B
to
package a.c
import a.B
class D extends B
or
package a
package c
class D extends B
* Use fully qualified package name when importing from outer package *
Suppose we have
package a.b
object O { val x = 1 }
Change
package a.b.c
import b.O.x
to
package a.b.c
import a.b.O.x
* When explicitly specifying type parameters in container method calls, add new type parameters *
Change
list.map[Int](f)
to
list.map[Int, List[Int]](f)
Change
map.transform[Value](g)
to
map.transform[Value, Map[Key, Value]](g)
* Create sorted map using Ordering instead of conversion to Ordered *
[scalac] found : (String) => Ordered[String]
[scalac] required: Ordering[String]
[scalac] TreeMap[String, Any](map.toList: _*)(stringToCaseInsensitiveOrdered _)
* Import the implicit conversions that replace scala.collection.jcl *
* Immutable Map .update becomes .updated *
*** Migrate from newly deprecated List methods --
* elements
* remove
* sort
* List.flatten(someList)
* List.fromString(someList, sep)
* List.make
*** Use List methods
* diff
* iterator
* filterNot
* sortWith
* someList.flatten
* someList.split(sep)
* List.fill
* classpath when using scala.tools.nsc.Settings *
http://thread.gmane.org/gmane.comp.lang.scala/18245/focus=18247
settings.classpath.value = System.getProperty("java.class.path")
* Avoid error: _ must follow method; cannot follow (Any) => Boolean *
Replace
list.filter(that.f _)
with
list.filter(that f _)
or
list.filter(that.f(_))
* Migrate from deprecated Enumeration methods iterator map *
Use Enumeration methods values.iterator values.map
* Migrate from deprecated Iterator.fromValues(a, b, c, d) *
Use Iterator(a, b, c, d)
* Avoid deprecated type Collection *
Use Iterable instead
* Change initialisation order *
Suppose we have
trait T {
  val v: String
  val w = v + v
}
Replace
class C extends T {
  val v = "v"
}
with
class C extends {
  val v = "v"
} with T
* Avoid unneeded val in for (val x <- ...) *
* Avoid trailing commas *