I am working on a Scala embedded DSL, and macros are becoming one of my main tools. I am getting an error while trying to reuse a subtree from the incoming macro expression in the resulting one. The situation is quite complex, but (I hope) I have simplified it enough to be understandable.
Suppose we have this code:
val y = transform {
  val x = 3
  x
}
println(y) // prints 3
where 'transform' is the macro involved. Although it may seem to do absolutely nothing, it actually transforms the block shown above into this expression:
3 match { case x => x }
It is done with this macro implementation:
def transform(c: Context)(block: c.Expr[Int]): c.Expr[Int] = {
  import c.universe._
  import definitions._
  block.tree match {
    /* {
     *   val xNam = xVal
     *   xExp
     * }
     */
    case Block(List(ValDef(_, xNam, _, xVal)), xExp) =>
      println("# " + showRaw(xExp)) // prints Ident(newTermName("x"))
      c.Expr(
        Match(
          xVal,
          List(CaseDef(
            Bind(xNam, Ident(newTermName("_"))),
            EmptyTree,
            /* xExp */ Ident(newTermName("x")) ))))
    case _ =>
      c.error(c.enclosingPosition, "Can't transform block to function")
      block // keep original expression
  }
}
Notice that xNam corresponds to the variable name, xVal to its associated value, and xExp to the expression containing the variable. Well, if I print the raw xExp tree I get Ident(newTermName("x")), and that is exactly what is set in the case RHS. Since the expression could be something other than a bare reference (for instance x+2 instead of x), hard-coding the Ident is not a valid solution for me. What I want is to reuse the xExp tree (see the xExp comment) while altering the meaning of 'x' (it is a definition in the input expression but becomes a case pattern variable in the output one), but that produces a long error summarized in:
symbol value x does not exist in org.habla.main.Main$delayedInit$body.apply); see the error output for details.
My current solution consists of parsing xExp to substitute all the Idents with new ones, but it is totally dependent on compiler internals and is therefore only a temporary workaround. It is obvious that xExp carries more information than what showRaw displays. How can I clean up xExp so that 'x' can play the role of the case variable? Can anyone explain the whole picture behind this error?
PS: I have been trying unsuccessfully to use the substitute* method family from the TreeApi but I am missing the basics to understand its implications.
Disassembling input expressions and reassembling them in a different fashion is an important scenario in macrology (this is what we do internally in the reify macro). But unfortunately, it's not particularly easy at the moment.
The problem is that input arguments of the macro reach macro implementation being already typechecked. This is both a blessing and a curse.
Of particular interest for us is the fact that variable bindings in the trees corresponding to the arguments are already established. This means that all Ident and Select nodes have their sym fields filled in, pointing to the definitions these nodes refer to.
Here is an example of how symbols work. I'll copy/paste a printout from one of my talks (I don't give a link here, because most of the info in my talks is deprecated by now, but this particular printout has everlasting usefulness):
>cat Foo.scala
def foo[T: TypeTag](x: Any) = x.asInstanceOf[T]
foo[Long](42)
>scalac -Xprint:typer -uniqid Foo.scala
[[syntax trees at end of typer]]// Scala source: Foo.scala
def foo#8339
[T#8340 >: Nothing#4658 <: Any#4657]
(x#9529: Any#4657)
(implicit evidence$1#9530: TypeTag#7861[T#8341])
: T#8340 =
x#9529.asInstanceOf#6023[T#8341];
Test#14.this.foo#8339[Long#1641](42)(scala#29.reflect#2514.`package`#3414.mirror#3463.TypeTag#10351.Long#10361)
To recap, we write a small snippet and then compile it with scalac, asking the compiler to dump the trees after the typer phase, printing unique ids of the symbols assigned to trees (if any).
In the resulting printout we can see that identifiers have been linked to their corresponding definitions. For example, on the one hand, the ValDef("x", ...), which represents the parameter of the method foo, introduces a symbol with id=9529. On the other hand, the Ident("x") in the body of the method got its sym field set to the same symbol, which establishes the binding.
Okay, we've seen how bindings work in scalac, and now is the perfect time to introduce a fundamental fact.
If a symbol has been assigned to an AST node,
then subsequent typechecks will never reassign it.
This is why reify is hygienic. You can take a result of reify and insert it into an arbitrary tree (that possibly defines variables with conflicting names) - the original bindings will remain intact. This works because reify preserves the original symbols, so subsequent typechecks won't rebind reified AST nodes.
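As a tiny illustration (a sketch against the Scala 2.10 macro API, with import c.universe._ in scope inside a macro implementation):

// reify keeps the symbols of the definitions it creates
val hygienic = reify { val tmp = 1; tmp + 1 }.tree
// splicing `hygienic` into code that defines its own `tmp` will not rebind this one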
Now we're all set to explain the error you're facing:
symbol value x does not exist in org.habla.main.Main$delayedInit$body.apply); see the error output for details.
The argument of the transform macro contains both a definition and a reference to a variable x. As we've just learned, this means that the corresponding ValDef and Ident will have their sym fields synchronized. So far, so good.
However unfortunately the macro corrupts the established binding. It recreates the ValDef, but doesn't clean up the sym field of the corresponding Ident. Subsequent typecheck assigns a fresh symbol to the newly created ValDef, but doesn't touch the original Ident that is copied to the result verbatim.
After the typecheck, the original Ident points to a symbol that no longer exists (this is exactly what the error message was saying :)), which leads to a crash during bytecode generation.
So how do we fix the error? Unfortunately there is no easy answer.
One option would be to utilize c.resetLocalAttrs, which recursively erases all symbols in a given AST node. The subsequent typecheck will then re-establish the bindings, provided that the code you generate doesn't mess with them (if, for example, you wrap xExp in a block that itself defines a value named x, then you're in trouble).
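Applied to your macro, that could look roughly like this (a sketch only, against the Scala 2.10 macro API; treat it as a starting point rather than a guaranteed fix):

case Block(List(ValDef(_, xNam, _, xVal)), xExp) =>
  c.Expr(
    Match(
      xVal,
      List(CaseDef(
        Bind(xNam, Ident(newTermName("_"))),
        EmptyTree,
        c.resetLocalAttrs(xExp)))))  // erase local symbols so the next typecheck rebinds x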
Another option is to fiddle with symbols. For example, you could write your own resetLocalAttrs that only erases corrupted bindings and doesn't touch the valid ones. You could also try and assign symbols by yourself, but that's a short road to madness, though sometimes one is forced to walk it.
Not cool at all, I agree. We're aware of that and intend to try to fix this fundamental issue sometime. However, right now our hands are full with bug fixing before the final 2.10.0 release, so we won't be able to address the problem in the near future. upd. See https://groups.google.com/forum/#!topic/scala-internals/rIyJ4yHdPDU for some additional information.
Bottom line. Bad things happen, because bindings get messed up. Try resetLocalAttrs first, and if it doesn't work, prepare yourself for a chore.
Related
So, I was trying to learn about continuations. I came across the following saying (link):
Say you're in the kitchen in front of the refrigerator, thinking about a sandwich. You take a continuation right there and stick it in your pocket. Then you get some turkey and bread out of the refrigerator and make yourself a sandwich, which is now sitting on the counter. You invoke the continuation in your pocket, and you find yourself standing in front of the refrigerator again, thinking about a sandwich. But fortunately, there's a sandwich on the counter, and all the materials used to make it are gone. So you eat it. :-) — Luke Palmer
Also, I saw a program in Scala:
var k1 : (Unit => Sandwich) = null
reset {
  shift { k : Unit => Sandwich) => k1 = k }
  makeSandwich
}
val x = k1()
I don't really know the syntax of Scala (looks similar to Java and C mixed together) but I would like to understand the concept of Continuation.
First, I tried to run this program (by adding it to main), but it fails. I think it has a syntax error due to the ) near Sandwich, but I'm not sure; I removed it and it still does not compile.
1. How can I create a fully compiled example that shows the concept of the story above?
2. How does this example show the concept of a continuation?
3. The link above says: "Not a perfect analogy in Scala because makeSandwich is not executed the first time through (unlike in Scheme)". What does that mean?
Since you seem to be more interested in the concept of the "continuation" rather than specific code, let's forget about that code for a moment (especially because it is quite old and I don't really like those examples because IMHO you can't understand them correctly unless you already know what a continuation is).
Note: this is a very long answer with some attempts to describe what a continuation is and why it is useful. There are some examples in Scala-like pseudo-code, none of which can actually be compiled and run (there is just one compilable example at the very end, and it references another example from the middle of the answer). Expect to spend a significant amount of time just reading this answer.
Intro to continuations
Probably the first thing you should do to understand continuations is to forget about how modern compilers for most imperative languages work, how most modern CPUs work, and in particular about the idea of the call stack. These are actually implementation details (although quite popular and quite useful in practice).
Assume you have a CPU that can execute some sequence of instructions. Now you want a high-level language that supports the idea of methods that can call each other. The obvious problem you face is that the CPU needs some "forward only" sequence of commands, but you want some way to "return" results from a sub-program to the caller. Conceptually this means that, before the call, you need some way to store all the state of the caller method that it needs to continue running after the result of the sub-program is computed, pass that state to the sub-program, and then ask the sub-program at the end to continue execution from that stored state. This stored state is exactly a continuation. In most modern environments those continuations are stored on the call stack, and there are often assembly instructions specifically designed to help handle this (like call and return). But again, this is just an implementation detail; the state could potentially be stored in an arbitrary way and it would still work.
So now let's reiterate this idea: a continuation is the state of the program at some point that is enough to continue its execution from that point, typically with no additional input or some small known input (like the return value of the called method). Running a continuation differs from a method call in that a continuation usually never explicitly returns execution control back to the caller; it can only pass it to another continuation. You could potentially create such a state yourself, but in practice, for the feature to be useful, you need some support from the compiler to build continuations automatically or to emulate them in some other way (this is why the Scala code you saw requires a compiler plugin).
Asynchronous calls
Now there is an obvious question: why are continuations useful at all? There are actually a few more scenarios besides the simple "return" case. One such scenario is asynchronous programming. If you make an asynchronous call and provide a callback to handle the result, this can be seen as passing a continuation. Unfortunately, most modern languages do not support automatic continuations, so you have to capture all the relevant state yourself. Another problem appears when some logic needs a sequence of many async calls; if some of the calls are conditional, you quickly end up in callback hell. The way continuations help you avoid it is by allowing you to build a method with effectively inverted control flow. With a typical call, it is the caller that knows the callee and expects to get a result back synchronously. With continuations you can write a method with several "entry points" (or "return-to points") for different stages of the processing logic; you can pass those to some other method, and that method can return to exactly that position.
Consider following example (in pseudo-code that is Scala-like but is actually far from the real Scala in many details):
def someBusinessLogic() = {
  val userInput = getIntFromUser()
  val firstServiceRes = requestService1(userInput)
  val secondServiceRes =
    if (firstServiceRes % 2 == 0) requestService2v1(userInput) else requestService2v2(userInput)
  showToUser(combineUserInputAndResults(userInput, secondServiceRes))
}
If all those calls are synchronous blocking calls, this code is easy. But assume all those get and request calls are asynchronous. How do you rewrite the code? The moment you put the logic in callbacks you lose the clarity of the sequential code. And here is where continuations might help you:
def someBusinessLogicCont() = {
  // the method entry point
  val userInput
  getIntFromUserAsync(cont1, captureContinuationExpecting(entry1, userInput))

  // entry/return point after user input
  entry1:
  val firstServiceRes
  requestService1Async(userInput, captureContinuationExpecting(entry2, firstServiceRes))

  // entry/return point after the first request to the service
  entry2:
  val secondServiceRes
  if (firstServiceRes % 2 == 0) {
    requestService2v1Async(userInput, captureContinuationExpecting(entry3, secondServiceRes))
    // entry/return point after the second request to the service v1
    entry3:
  } else {
    requestService2v2Async(userInput, captureContinuationExpecting(entry4, secondServiceRes))
    // entry/return point after the second request to the service v2
    entry4:
  }

  showToUser(combineUserInputAndResults(userInput, secondServiceRes))
}
It is hard to capture the idea in pseudo-code. What I mean is that all those Async methods never return. The only way to continue execution of someBusinessLogicCont is to call the continuation passed into the "async" method. The captureContinuationExpecting(label, variable) call is supposed to create a continuation of the current method at the given label, with the input (return) value bound to the variable. With such a rewrite you still have sequential-looking business logic even with all those asynchronous calls. So now, for getIntFromUserAsync, the second argument looks like just another asynchronous (i.e. never-returning) method that requires one integer argument. Let's call this type Continuation[T]:
trait Continuation[T] {
  def continue(value: T): Nothing
}
Logically Continuation[T] looks like a function T => Unit, or rather T => Nothing, where Nothing as the return type signifies that the call never actually returns (note that in the actual Scala implementation such calls do return, so there is no Nothing there, but I think it is conceptually easier to think about no-return continuations).
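To make the pseudo-code above a bit more concrete, the "async" methods could be imagined with signatures like these (purely illustrative; none of this is a real API):

// Illustrative signatures only, matching the pseudo-code above
def getIntFromUserAsync(cont: Continuation[Int]): Nothing = ???
def requestService1Async(input: Int, cont: Continuation[Int]): Nothing = ???
def requestService2v1Async(input: Int, cont: Continuation[Int]): Nothing = ???
def requestService2v2Async(input: Int, cont: Continuation[Int]): Nothing = ???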
Internal vs external iteration
Another example is a problem of iteration. Iteration can be internal or external. Internal iteration API looks like this:
trait CollectionI[T] {
  def forEachInternal(handler: T => Unit): Unit
}
External iteration looks like this:
trait Iterator[T] {
  def nextValue(): Option[T]
}

trait CollectionE[T] {
  def forEachExternal(): Iterator[T]
}
Note: often an Iterator has two methods, like hasNext and nextValue returning T, but that would just make the story a bit more complicated. Here I use a merged nextValue returning Option[T], where None means the end of the iteration and Some(value) means the next value.
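For comparison, the conventional two-method shape mentioned in the note would look like this:

// The more common external-iterator shape (shown only for comparison)
trait ClassicIterator[T] {
  def hasNext: Boolean
  def next(): T
}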
Assuming the Collection is implemented by something more complicated than an array or a simple list, for example some kind of tree, there is a conflict here between the implementer of the API and the API user if you use a typical imperative language. The conflict is over a simple question: who controls the stack (i.e. the easy-to-use state of the program)? Internal iteration is easier for the implementer, because he controls the stack and can easily store whatever state is needed to move to the next item, but for the API user things become tricky if she wants to do some aggregation over the data, because now she has to save the state between calls to the handler somewhere. You also need some additional tricks to let the user stop the iteration at some arbitrary place before the end of the data (consider trying to implement find via forEach). Conversely, external iteration is easy for the user: she can store all the state necessary to process the data in any way in local variables, but the API implementer now has to store his state between calls to nextValue somewhere else. So fundamentally the problem arises because there is only one place to easily store the state of "your" part of the program (the call stack) and two conflicting users for that place. It would be nice if you could just have two different independent places for the state: one for the implementer and another for the user. And continuations provide exactly that. The idea is that we can pass execution control between two methods back and forth using two continuations (one for each part of the program). Let's change the signatures to:
// internal iteration

// continuation of the iterator
type ContIterI[T] = Continuation[(ContCallerI[T], ContCallerLastI)]
// continuation of the caller
type ContCallerI[T] = Continuation[(T, ContIterI[T])]
// last continuation of the caller
type ContCallerLastI = Continuation[Unit]

// external iteration

// continuation of the iterator
type ContIterE[T] = Continuation[ContCallerE[T]]
// continuation of the caller
type ContCallerE[T] = Continuation[(Option[T], ContIterE[T])]

trait Iterator[T] {
  def nextValue(cont: ContCallerE[T]): Nothing
}

trait CollectionE[T] {
  def forEachExternal(): Iterator[T]
}

trait CollectionI[T] {
  def forEachInternal(cont: ContCallerI[T]): Nothing
}
Here the ContCallerI[T] type, for example, means a continuation (i.e. a state of the program) that expects two input parameters to continue running: one of type T (the next element) and another of type ContIterI[T] (the continuation to switch back to). Now you can see that the new forEachInternal and the new forEachExternal + Iterator have almost the same signatures. The only difference is in how the end of the iteration is signaled: in one case by returning None, and in the other by passing and calling another continuation (ContCallerLastI).
Here is a naive pseudo-code implementation of a sum of elements in an array of Int using these signatures (an array is used instead of something more complicated to simplify the example):
class ArrayCollection[T](val data: T[]) : CollectionI[T] {
  def forEachInternal(cont0: ContCallerI[T], lastCont: ContCallerLastI): Nothing = {
    var contCaller = cont0
    for (i <- 0 until data.length) {
      val contIter = captureContinuationExpecting(label, contCaller)
      contCaller.continue(data(i), contIter)
      label:
    }
  }
}

def sum(arr: ArrayCollection[Int]): Int = {
  var sum = 0
  val elem: Int
  val iterCont: ContIterI[Int]
  val contAdd0 = captureContinuationExpecting(labelAdd, elem, iterCont)
  val contLast = captureContinuation(labelReturn)
  arr.forEachInternal(contAdd0, contLast)

  labelAdd:
  sum += elem
  val contAdd = captureContinuationExpecting(labelAdd, elem, iterCont)
  iterCont.continue(contAdd)
  // note that the code never executes this line; the only way to jump out of labelAdd is to call contLast

  labelReturn:
  return sum
}
Note how both implementations of the forEachInternal and of the sum methods look fairly sequential.
Multi-tasking
Cooperative multitasking, also known as coroutines, is actually very similar to the iteration example. Cooperative multitasking is the idea that a program can voluntarily give up ("yield") its execution control either to a global scheduler or to another known coroutine. The last (rewritten) sum example can be seen as two coroutines working together: one doing iteration and another doing summation. More generally, your code might yield its execution to some scheduler that then selects which other coroutine to run next. What the scheduler does is manage a bunch of continuations, deciding which to continue next.
Preemptive multitasking can be seen as a similar thing, but the scheduler is run by some hardware interrupt, and the scheduler then needs a way to create a continuation of the program being executed just before the interrupt, from outside that program rather than from the inside.
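As a deliberately naive illustration of the cooperative case (plain Scala, no continuations plugin; each "coroutine" step is just a thunk that re-enqueues its successor, and the "scheduler" simply drains a queue):

import scala.collection.mutable

object Coop {
  private val queue = mutable.Queue.empty[() => Unit]
  def spawn(step: () => Unit): Unit = queue.enqueue(step)
  def run(): Unit = while (queue.nonEmpty) queue.dequeue().apply()

  def main(args: Array[String]): Unit = {
    def producer(n: Int): Unit =
      if (n > 0) { println(s"produce $n"); spawn(() => consumer(n)) }
    def consumer(n: Int): Unit = { println(s"consume $n"); spawn(() => producer(n - 1)) }

    spawn(() => producer(3))
    run() // the scheduler interleaves the two: produce 3, consume 3, produce 2, consume 2, ...
  }
}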
Scala examples
What you see is a really old article referring to Scala 2.8 (while current versions are 2.11, 2.12, and soon 2.13). As #igorpcholkin correctly pointed out, you need to use the Scala continuations compiler plugin and library. The sbt compiler plugin page has an example of how to enable exactly that plugin (for Scala 2.12; #igorpcholkin's answer has the magic strings for Scala 2.11):
val continuationsVersion = "1.0.3"
autoCompilerPlugins := true
addCompilerPlugin("org.scala-lang.plugins" % "scala-continuations-plugin_2.12.2" % continuationsVersion)
libraryDependencies += "org.scala-lang.plugins" %% "scala-continuations-library" % continuationsVersion
scalacOptions += "-P:continuations:enable"
The problem is that the plugin is semi-abandoned and is not widely used in practice. Also, the syntax has changed since the Scala 2.8 days, so it is hard to get those old examples running even if you fix the obvious syntax bugs like a missing ( here and there. The reason for that state is stated on GitHub as:
You may also be interested in https://github.com/scala/async, which covers the most common use case for the continuations plugin.
What that plugin does is emulate continuations using code rewriting (I suppose it is really hard to implement true continuations on top of the JVM execution model). Under such a rewriting, a natural way to represent a continuation is as a function (typically called k or k1 in those examples).
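Very roughly (hand-waving over the actual CPS transform), you can picture the rewrite like this: "the rest of the reset block" becomes the function k. A plain-Scala sketch, no plugin required:

// Conceptually,  reset { g(shift { k => f(k) }) }  behaves like  f(v => g(v))
def g(x: Int): Int = x + 1
def f(k: Int => Int): Int = k(k(5)) // a shift body that invokes its continuation twice

val result = f((v: Int) => g(v))    // ~ reset { g(shift { k => k(k(5)) }) }
// result == 7: k(5) resumes the block giving 6, and k(6) gives 7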
So now, if you managed to read the wall of text above, you can probably interpret the sandwich example correctly. AFAIU that example uses a continuation as a means to emulate "return". If we restate it with more details, it could go like this:
You (your brain) are inside some function that at some point decides that it wants a sandwich. Luckily you have a sub-routine that knows how to make a sandwich. You store your current brain state as a continuation into your pocket and call the sub-routine, telling it that when the job is done, it should continue the continuation from the pocket. Then you make a sandwich according to that sub-routine, overwriting your previous brain state. At the end, the sub-routine runs the continuation from the pocket, and you return to the state just before the call of the sub-routine: you forget everything that happened during the sub-routine (i.e. how you made the sandwich), but you can see the changes in the outside world, i.e. that the sandwich is now made.
To provide at least one compilable example with the current version of the scala-continuations, here is a simplified version of my asynchronous example:
case class RemoteService(private val readData: Array[Int]) {
  private var readPos = -1

  def asyncRead(callback: Int => Unit): Unit = {
    readPos += 1
    callback(readData(readPos))
  }
}

def readAsyncUsage(rs1: RemoteService, rs2: RemoteService): Unit = {
  import scala.util.continuations._
  reset {
    val read1 = shift(rs1.asyncRead)
    val read2 = if (read1 % 2 == 0) shift(rs1.asyncRead) else shift(rs2.asyncRead)
    println(s"read1 = $read1, read2 = $read2")
  }
}

def readExample(): Unit = {
  // this prints 1-42
  readAsyncUsage(RemoteService(Array(1, 2)), RemoteService(Array(42)))
  // this prints 2-1
  readAsyncUsage(RemoteService(Array(2, 1)), RemoteService(Array(42)))
}
Here remote calls are emulated (mocked) with a fixed data provided in arrays. Note how readAsyncUsage looks like a totally sequential code despite the non-trivial logic of which remote service to call in the second read depending on the result of the first read.
For a full example you need to prepare the Scala compiler to use continuations and also use a special compiler plugin and library.
The simplest way is to create a new sbt project in IntelliJ IDEA with the following files: build.sbt in the root of the project, and CTest.scala inside src/main/scala.
Here is contents of both files:
build.sbt:
name := "ContinuationSandwich"
version := "0.1"
scalaVersion := "2.11.6"
autoCompilerPlugins := true
addCompilerPlugin(
"org.scala-lang.plugins" % "scala-continuations-plugin_2.11.6" % "1.0.2")
libraryDependencies +=
"org.scala-lang.plugins" %% "scala-continuations-library" % "1.0.2"
scalacOptions += "-P:continuations:enable"
CTest.scala:
import scala.util.continuations._

object CTest extends App {
  case class Sandwich()

  def makeSandwich = {
    println("Making sandwich")
    new Sandwich
  }

  var k1: (Unit => Sandwich) = null

  reset {
    shift { k: (Unit => Sandwich) => k1 = k }
    makeSandwich
  }

  val x = k1()
}
What the code above essentially does is call the makeSandwich function (in a convoluted manner). So the result of execution is just printing "Making sandwich" to the console. The same result would be achieved without continuations:
object CTest extends App {
  case class Sandwich()

  def makeSandwich = {
    println("Making sandwich")
    new Sandwich
  }

  val x = makeSandwich
}
So what's the point? My understanding is that we want to "prepare a sandwich", ignoring the fact that we may not be ready for that. We mark a point in time that we want to return to once all the necessary conditions are met (i.e. we have all the necessary ingredients ready). After we fetch all the ingredients we can return to the mark and "prepare a sandwich", "forgetting that we were unable to do that in the past". Continuations allow us to "mark a point in time in the past" and return to that point.
Now step by step. k1 is a variable that holds a pointer to a function which should allow us to "create a sandwich". We know this from how k1 is declared: (Unit => Sandwich).
However, initially the variable is not initialized (k1 = null, "there are no ingredients to make a sandwich yet"), so we can't call the sandwich-preparing function through that variable yet.
So we mark a point of execution that we want to return to (or a point in time in the past we want to return to) using the "reset" statement.
makeSandwich is another pointer to a function which actually makes a sandwich. It's the last statement of the "reset" block and hence it is passed to the "shift" function as an argument (shift { k : (Unit => Sandwich) ...). Inside shift we assign that argument to the k1 variable (k1 = k), thus making k1 ready to be called as a function. After that we return to the execution point marked by reset. The next statement executes the function pointed to by the k1 variable, which is now properly initialized, so we finally call makeSandwich, which prints "Making sandwich" to the console. It also returns an instance of the Sandwich class, which is assigned to the x variable.
Not sure; it probably means that makeSandwich is not called inside the reset block but only afterwards, when we call it as k1().
Here are my thoughts on the question. Can anyone confirm, deny, or elaborate?
I wrote:
Scala doesn’t unify covariant List[A] with a GLB ⊤ assigned to List[Int], bcz afaics in subtyping “biunification” the direction of assignment matters. Thus None must have type Option[⊥] (i.e. Option[Nothing]), ditto Nil type List[Nothing] which can’t accept assignment from an Option[Int] or List[Int] respectively. So the value restriction problem originates from directionless unification and global biunification was thought to be undecidable until the recent research linked above.
You may wish to view the context of the above comment.
ML’s value restriction will disallow parametric polymorphism in (formerly thought to be rare but maybe more prevalent) cases where it would otherwise be sound (i.e. type safe) to do so such as especially for partial application of curried functions (which is important in functional programming), because the alternative typing solutions create a stratification between functional and imperative programming as well as break encapsulation of modular abstract types. Haskell has an analogous dual monomorphisation restriction. OCaml has a relaxation of the restriction in some cases. I elaborated about some of these details.
EDIT: my original intuition as expressed in the above quote (that the value restriction may be obviated by subtyping) is incorrect. The answers IMO elucidate the issue(s) well and I’m unable to decide which in the set containing Alexey’s, Andreas’, or mine, should be the selected best answer. IMO they’re all worthy.
As I explained before, the need for the value restriction -- or something similar -- arises when you combine parametric polymorphism with mutable references (or certain other effects). That is completely independent from whether the language has type inference or not or whether the language also allows subtyping or not. A canonical counter example like
let r : ∀A.Ref(List(A)) = ref [] in
r := ["boo"];
head(!r) + 1
is not affected by the ability to elide the type annotation nor by the ability to add a bound to the quantified type.
Consequently, when you add references to F<: then you need to impose a value restriction to not lose soundness. Similarly, MLsub cannot get rid of the value restriction. Scala enforces a value restriction through its syntax already, since there is no way to even write the definition of a value that would have polymorphic type.
It's much simpler than that. In Scala values can't have polymorphic types, only methods can. E.g. if you write
val id = x => x
its type isn't [A] A => A.
And if you take a polymorphic method e.g.
def id[A](x: A): A = x
and try to assign it to a value
val id1 = id
again the compiler will try (and in this case fail) to infer a specific A instead of creating a polymorphic value.
So the issue doesn't arise.
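A quick way to see this in a Scala 2 REPL (a sketch; the exact inferred type can vary between compiler versions):

def id[A](x: A): A = x

// Eta-expanding the polymorphic method forces the compiler to pick a single A,
// so the resulting value gets a monomorphic type (typically Nothing => Nothing),
// not the polymorphic type [A] A => A.
val id1 = id _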
EDIT:
If you try to reproduce the http://mlton.org/ValueRestriction#_alternatives_to_the_value_restriction example in Scala, the problem you run into isn't the lack of let: val corresponds to it perfectly well. But you'd need something like
val f[A]: A => A = {
  var r: Option[A] = None
  { x => ... }
}
which is illegal. If you write def f[A]: A => A = ... it's legal but creates a new r on each call. In ML terms it would be like
val f: unit -> ('a -> 'a) =
  fn () =>
    let
      val r: 'a option ref = ref NONE
    in
      fn x =>
        let
          val y = !r
          val () = r := SOME x
        in
          case y of
            NONE => x
          | SOME y => y
        end
    end

val _ = f () 13
val _ = f () "foo"
which is allowed by the value restriction.
That is, Scala's rules are equivalent to only allowing lambdas as polymorphic values in ML instead of everything value restriction allows.
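For reference, a sketch of the legal def version mentioned in the edit; every call allocates a fresh r, so the Int and String uses below cannot interfere with each other:

def f[A]: A => A = {
  var r: Option[A] = None
  x => {
    val y = r      // previously stored value, if any
    r = Some(x)
    y.getOrElse(x)
  }
}

f[Int](13)       // fresh r for this call
f[String]("foo") // and another fresh r here, so no unsoundness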
EDIT: this answer was incorrect before. I have completely rewritten the explanation below to gather my new understanding from the comments under the answers by Andreas and Alexey.
The edit history and the history of archives of this page at archive.is provides a recording of my prior misunderstanding and discussion. Another reason I chose to edit rather than delete and write a new answer, is to retain the comments on this answer. IMO, this answer is still needed because although Alexey answers the thread title correctly and most succinctly—also Andreas’ elaboration was the most helpful for me to gain understanding—yet I think the layman reader may require a different, more holistic (yet hopefully still generative essence) explanation in order to quickly gain some depth of understanding of the issue. Also I think the other answers obscure how convoluted a holistic explanation is, and I want naive readers to have the option to taste it. The prior elucidations I’ve found don’t state all the details in English language and instead (as mathematicians tend to do for efficiency) rely on the reader to discern the details from the nuances of the symbolic programming language examples and prerequisite domain knowledge (e.g. background facts about programming language design).
The value restriction arises where we have mutation of referenced[1] type parametrised objects[2]. The type unsafety that would result without the value restriction is demonstrated in the following MLton code example:
val r: 'a option ref = ref NONE
val r1: string option ref = r
val r2: int option ref = r
val () = r1 := SOME "foo"
val v: int = valOf (!r2)
The NONE value (which is akin to null) contained in the object referenced by r can be assigned to a reference with any concrete type for the type parameter 'a because r has the polymorphic type 'a. That would allow type unsafety because, as shown in the example above, the same object referenced by r, which has been assigned to both string option ref and int option ref, can be written (i.e. mutated) with a string value via the r1 reference and then read as an int value via the r2 reference. The value restriction generates a compiler error for the above example.
A typing complication arises to prevent[3] the (re-)quantification (i.e. binding or determination) of the type parameter (aka type variable) of a said reference (and the object it points to) to a type which differs when reusing an instance of said reference that was previously quantified with a different type.
Such (arguably bewildering and convoluted) cases arise for example where successive function applications (aka calls) reuse the same instance of such a reference. IOW, cases where the type parameters (pertaining to the object) for a reference are (re-)quantified each time the function is applied, yet the same instance of the reference (and the object it points to) being reused for each subsequent application (and quantification) of the function.
Tangentially, the occurrence of these is sometimes non-intuitive due to lack of explicit universal quantifier ∀ (since the implicit rank-1 prenex lexical scope quantification can be dislodged from lexical evaluation order by constructions such as let or coroutines) and the arguably greater irregularity (as compared to Scala) of when unsafe cases may arise in ML’s value restriction:
Andreas wrote:
Unfortunately, ML does not usually make the quantifiers explicit in its syntax, only in its typing rules.
Reusing a referenced object is for example desired for let expressions which analogous to math notation, should only create and evaluate the instantiation of the substitutions once even though they may be lexically substituted more than once within the in clause. So for example, if the function application is evaluated as (regardless of whether also lexically or not) within the in clause whilst the type parameters of substitutions are re-quantified for each application (because the instantiation of the substitutions are only lexically within the function application), then type safety can be lost if the applications aren’t all forced to quantify the offending type parameters only once (i.e. disallow the offending type parameter to be polymorphic).
The value restriction is ML’s compromise to prevent all unsafe cases while also preventing some (formerly thought to be rare) safe cases, so as to simplify the type system. The value restriction is considered a better compromise, because the early (antiquated?) experience with more complicated typing approaches that didn’t restrict any or as many safe cases, caused a bifurcation between imperative and pure functional (aka applicative) programming and leaked some of the encapsulation of abstract types in ML functor modules. I cited some sources and elaborated here. Tangentially though, I’m pondering whether the early argument against bifurcation really stands up against the fact that value restriction isn’t required at all for call-by-name (e.g. Haskell-esque lazy evaluation when also memoized by need) because conceptually partial applications don’t form closures on already evaluated state; and call-by-name is required for modular compositional reasoning and when combined with purity then modular (category theory and equational reasoning) control and composition of effects. The monomorphisation restriction argument against call-by-name is really about forcing type annotations, yet being explicit when optimal memoization (aka sharing) is required is arguably less onerous given said annotation is needed for modularity and readability any way. Call-by-value is a fine tooth comb level of control, so where we need that low-level control then perhaps we should accept the value restriction, because the rare cases that more complex typing would allow would be less useful in the imperative versus applicative setting. However, I don’t know if the two can be stratified/segregated in the same programming language in smooth/elegant manner. Algebraic effects can be implemented in a CBV language such as ML and they may obviate the value restriction. IOW, if the value restriction is impinging on your code, possibly it’s because your programming language and libraries lack a suitable metamodel for handling effects.
Scala makes a syntactical restriction against all such references, which is a compromise that restricts, for example, the same and even more cases (that would be safe if not restricted) than ML’s value restriction, but is more regular in the sense that we’ll never be scratching our heads over an error message pertaining to the value restriction. In Scala, we’re never allowed to create such a reference. Thus in Scala we can only express cases where a new instance of a reference is created when its type parameters are quantified. Note OCaml relaxes the value restriction in some cases.
Note that afaik neither Scala nor ML lets you declare that a reference is immutable[1], although the object it points to can be declared immutable with val. Note there’s no need for the restriction for references that can’t be mutated.
The reason that mutability of the reference type[1] is required in order to make the complicated typing cases arise is that if we instantiate the reference (e.g. in the substitutions clause of let) with a non-parametrised object (i.e. not None or Nil[4] but instead, for example, an Option[String] or List[Int]), then the reference won’t have a polymorphic type (pertaining to the object it points to) and thus the re-quantification issue never arises. So the problematic cases are due to instantiation with a polymorphic object, then subsequently assigning a newly quantified object (i.e. mutating the reference type) in a re-quantified context, followed by dereferencing (reading) from the (object pointed to by the) reference in a subsequent re-quantified context. As aforementioned, when the re-quantified type parameters conflict, the typing complication arises and the unsafe cases must be prevented/restricted.
Phew! If you understood that without reviewing linked examples, I’m impressed.
[1] IMO employing the phrase “mutable references” instead of “mutability of the referenced object” and “mutability of the reference type” would be potentially more confusing, because our intention is to mutate the object’s value (and its type) which is referenced by the pointer, not to refer to mutability of the pointer itself. Some programming languages don’t even explicitly distinguish, in the case of primitive types, between disallowing mutation of the reference and of the object it points to.
[2] Wherein an object may even be a function, in a programming language that allows first-class functions.
[3] To prevent a segmentation fault at runtime due to accessing (reading or writing) the referenced object under a presumption about its statically (i.e. at compile-time) determined type which is not the type that the object actually has.
[4] Which are NONE and [] respectively in ML.
In the syntax analysis phase, an imperative compiler can build an AST out of nodes that already contain a type field that is set to null during construction, and then later, in the semantic analysis phase, fill in the types by assigning the declared/inferred types into the type fields.
How do purely functional languages handle this, where you do not have the luxury of assignment? Is the type-less AST mapped to a different kind of type-enriched AST? Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?
Are there purely functional programming tricks that help the compiler writer with this problem?
I usually rewrite the source AST (or one that has already been lowered several steps) into a new form, replacing each expression node with a pair (tag, expression).
Tags are unique numbers or symbols which are then used by the next pass which derives type equations from the AST. E.g., a + b will yield something like { numeric(Tag_a). numeric(Tag_b). equals(Tag_a, Tag_b). equals(Tag_e, Tag_a).}.
Then the type equations are solved (e.g., by simply running them as a Prolog program), and, if successful, all the tags (which are variables in this program) are bound to concrete types; if not, they're left as type parameters.
In the next step, our previous AST is rewritten again, this time replacing tags with all the inferred type information.
The whole process is a sequence of pure rewrites; there is no need to replace anything in your AST destructively. A typical compilation pipeline may take a couple of dozen rewrites, some of them changing the AST datatype.
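A small Scala sketch of the tagging rewrite described above (the toy AST and names are mine, just to make the idea concrete):

sealed trait Expr
case class Lit(n: Int) extends Expr
case class Add(a: Expr, b: Expr) extends Expr

sealed trait Tagged
case class TLit(tag: Int, n: Int) extends Tagged
case class TAdd(tag: Int, a: Tagged, b: Tagged) extends Tagged

// Pure rewrite: thread a counter and pair every node with a fresh tag
def tag(e: Expr, next: Int): (Tagged, Int) = e match {
  case Lit(n) => (TLit(next, n), next + 1)
  case Add(a, b) =>
    val (ta, n1) = tag(a, next + 1)
    val (tb, n2) = tag(b, n1)
    (TAdd(next, ta, tb), n2)
}

// tag(Add(Lit(1), Lit(2)), 0) == (TAdd(0, TLit(1, 1), TLit(2, 2)), 3)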
There are several options to model this. You may use the same kind of nullable data fields as in your imperative case:
data Exp = Var Name (Maybe Type) | ...
parse :: String -> Maybe Exp -- types are Nothings here
typeCheck :: Exp -> Maybe Exp -- turns Nothings into Justs
or even, using a more precise type
data Exp ty = Var Name ty | ...
parse :: String -> Maybe (Exp ())
typeCheck :: Exp () -> Maybe (Exp Type)
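The same idea rendered in Scala, in case that helps: parameterize the tree by its annotation type, so the "untyped" and "typed" ASTs are two instantiations of one definition (the Type, Lit and Add names here are illustrative placeholders):

sealed trait Type
case object IntT extends Type

sealed trait Exp[A]
case class Lit[A](value: Int, ann: A) extends Exp[A]
case class Add[A](lhs: Exp[A], rhs: Exp[A], ann: A) extends Exp[A]

// parse would produce Exp[Unit]; the checker turns it into Exp[Type]
def typeCheck(e: Exp[Unit]): Option[Exp[Type]] = e match {
  case Lit(v, _)    => Some(Lit(v, IntT))
  case Add(l, r, _) => for (tl <- typeCheck(l); tr <- typeCheck(r)) yield Add(tl, tr, IntT)
}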
I can't speak to how it is supposed to be done, but I did do this in F# for a C# compiler here.
The approach was basically: build an AST from the source, leaving things like type information unconstrained. So AST.fs is basically the AST, with strings for the type names, function names, etc.
As the AST starts to be compiled to (in this case) .NET IL, we end up with more type information (we create the types in the source; let's call these type stubs). This then gives us the information needed to create method stubs (the code may have signatures that include type stubs as well as built-in types). From here we now have enough type information to resolve any of the type names or method signatures in the code.
I store that in the file TypedAST.fs. I do this in a single pass; however, the approach may be naive.
Now that we have a fully typed AST, you can do things like compile it, fully analyze it, or do whatever you like with it.
So in answer to the question "Does that mean I need to define two types per AST node, one for the syntax phase, and one for the semantic phase?", I can't say definitively that this is the case, but it is certainly what I did, and it appears to be what MS have done with Roslyn (although they have essentially decorated the original tree with type info IIRC).
"Are there purely functional programming tricks that help the compiler writer with this problem?"
Given the ASTs are essentially mirrored in my case, it would be possible to make it generic and transform the tree, but the code may end up (more) horrendous.
i.e.
type 'type AST;
| MethodInvoke of 'type * Name * 'type list
| ....
Like in the case when dealing with relational databases, in functional programming it is often a good idea not to put everything in a single data structure.
In particular, there may not be a data structure that is "the AST".
Most probably, there will be data structures that represent parsed expressions. One possible way to deal with type information is to assign a unique identifier (like an integer) to each node of the tree already during parsing and have some suitable data structure (like a hash map) that associates those node-ids with types. The job of the type inference pass, then, would be just to create this map.
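A minimal sketch of that side-table approach (node ids assigned at parse time; types are kept as plain strings here just for brevity):

case class Node(id: Int, kind: String, children: List[Node])

// The inference pass only builds a map from node id to type; the tree itself is never mutated
def inferTypes(root: Node): Map[Int, String] = {
  def walk(n: Node): Map[Int, String] =
    n.children.map(walk).foldLeft(Map(n.id -> "Int"))(_ ++ _) // toy rule: everything is "Int"
  walk(root)
}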
I'm working on a mapper and wanted a type-safe way to capture class field names for mapping, so I went with a syntax I'd used in C#:
case class Person(name: String, age: Int)

new Mapping[Person]() {
  field(_.age).name("person_age").colType[java.lang.Integer]
  field(_.name).name("person_name")
}
where def field(m: T => Unit): FieldMap
This triggers the following warnings:
Warning:(97, 13) a pure expression does nothing in statement position; you may be omitting necessary parentheses
field(_.age).name("person_age").colType[java.lang.Integer]
^
Warning:(98, 13) a pure expression does nothing in statement position; you may be omitting necessary parentheses
field(_.name).name("person_name")
^
So clearly that's not a desirable syntax. Is there any way I can tweak the signature of field to avoid the warning, or is there a more idiomatic Scala way of mapping fields in a type-safe manner?
Note: #sjrd's answer indeed gets rid of the warning, but the attempted feature doesn't seem feasible with Scala reflection after all. My end goal is a Mapper that allows specifying T members in a compile-time-checked manner, rather than as strings, so it's less vulnerable to typos and refactoring issues.
The field method takes a T => Unit function as parameter. Hence, the lambda _.age, which is equivalent to x => x.age, is typechecked as returning Unit. The compiler warns that you are using a pure expression (x.age) in statement position (expected type Unit), which basically means that the expression is useless, and might as well be removed.
There is a very simple symptomatic solution to your problem: replace m: T => Unit by m: T => Any. Now your expression x.age is not in statement position anymore, and the compiler is happy.
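In code, the suggested change amounts to something like this (FieldMap is stubbed out only so the sketch is self-contained):

trait FieldMap {
  def name(n: String): FieldMap
  def colType[C]: FieldMap
}

class Mapping[T] {
  // was: def field(m: T => Unit): FieldMap
  def field(m: T => Any): FieldMap = ??? // Any keeps _.age out of statement position, so no warning
}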
But your code suggests that there is something wrong a little bit deeper, since you obviously don't use the result of m anywhere. What is m for, anyway?
I stumbled over a particular problem with interfaces while debugging some code, where a called subroutine has a dummy argument of rank 2 but an actual argument of rank 1. The resulting mismatch between the arguments led to an invalid read.
To reproduce I created a small program (ignore the comments ! <> for now):
PROGRAM ptest
  USE mtest ! <>
  IMPLICIT NONE
  REAL, ALLOCATABLE, DIMENSION(:) :: field
  INTEGER :: n
  REAL :: s
  n = 10
  ALLOCATE(field(n))
  CALL RANDOM_NUMBER(field)
  CALL stest(n, field, s)
  WRITE(*,*) s
  DEALLOCATE(field)
END PROGRAM
and a module
MODULE mtest ! <>
  IMPLICIT NONE ! <>
CONTAINS ! <>
  SUBROUTINE stest(n, field, erg)
    INTEGER :: n
    REAL, DIMENSION(n,n) :: field
    REAL :: erg
    erg = SUM(field)
  END SUBROUTINE
END MODULE ! <>
As far as I understand, this subroutine gets an automatic (explicit?) interface from being placed in the module. The problem is that the actual field has length 10, while the subroutine sums a field of length 10x10=100, which is clearly visible in valgrind as an invalid read.
Then I tested this same code without the module, i.e. all lines marked with ! <> got removed/commented. As a result, gfortran's -Wimplicit-interface threw a warning, but the code worked as before.
So my question is: what is the best way to deal with such a situation? Should I always place a generic interface à la
INTERFACE stest
  MODULE PROCEDURE stest
END INTERFACE
in the module? Or should I replace the definition of field with a deferred-shape array (i.e. REAL, ALLOCATABLE, DIMENSION(:,:) :: field)?
EDIT: To be more precise about my question: I don't want to solve this particular problem, but want to know what to do to get better diagnostic output from the compiler.
E.g. the given code doesn't give an error message and can, in principle, produce a segmentation fault (though the code doesn't notice it). Placing a generic interface at least produces an error complaining that no matching definition for stest is found, which is also not really helpful, especially in the case where you don't have the source code. Only a deferred-shape array resulted in an understandable error message (rank mismatch).
And this is where I'm wondering why the automatic module interface doesn't give a similar warning/error message.
The compiler cannot warn you, because the code is legal! You just pass the wrong n and a non-square number of points. For explicit-shape arrays you are responsible for the correct dimensions. Consider
ALLOCATE(field(1000))
CALL stest(10, field, s)
This code will work although the number of elements of the actual and dummy arguments is not the same. Maybe suggest to the gfortran developers that they check whether the dummy argument is not larger, but I am not sure how difficult that is.
The generic interface causes the compiler to check the TKR (type, kind, rank) rules. No sequence association of arrays of different rank is allowed, so the compilation will fail. It will therefore disable all legal uses of passing arrays of different rank to explicit-shape and assumed-size dummy arguments and limit your possibilities.
What is the solution? Use explicit-shape arrays for the situations they are good for and use assumed-shape arrays otherwise (possibly with the contiguous attribute). The generic interface might help too, but it changes the semantics and limits the possible uses.