I often read that some programming languages have "first-class" support for modules (OCaml, Scala, TypeScript[?]), and I recently stumbled upon an answer on SO citing modules as first-class citizens among the distinguishing features of Scala.
I thought I knew very well what modular programming means, but after these incidents I'm beginning to doubt my understanding...
I think modules are nothing more than instances of certain classes that act as mini-libraries. The mini-library code goes into a class, and objects of that class are the modules. You can pass them around as dependencies to any other class that requires the services provided by the module, so any decent OOPL has first-class modules - but apparently not!
1. What exactly is a module? How is it different from, say, a plain class or an object?
2. How is (1) related (or not) to the modular programming that we all know?
3. What exactly does it mean for a language to have first-class modules? What are the benefits? What are the drawbacks if a language lacks such a feature?
A module, as well as a subroutine, is a way of organizing your code. When we develop programs, we pack instructions into subroutines, subroutines into structures, structures into packages, libraries, assemblies, frameworks, solutions, and so on. So, putting everything else aside, it is just a mechanism to organize your code.
The essential reason why we use all those mechanisms, instead of just laying out our instructions linearly, is that the complexity of a program grows non-linearly with respect to its size. In other words, a program built from n pieces, each having m instructions, is easier to comprehend than a program built from n*m instructions. This is, of course, not always true (otherwise we could just split our program into arbitrary parts and be happy). In fact, for it to be true, we have to introduce one essential mechanism called abstraction. We can benefit from splitting a program into manageable subparts only if each part provides some sort of abstraction. For example, we can have connect_to_database, query_for_students, sort_by_grade, and take_the_first_n abstractions packed as functions or subroutines, and it is much easier to understand code expressed in terms of those abstractions than to understand code in which all those functions are inlined.
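As a hedged sketch of that point in Scala (one of the languages the question mentions; all names here are hypothetical stand-ins for the abstractions above), code written against named functions reads at the level of the problem rather than at the level of inlined instructions:

final case class Student(name: String, grade: Int)
trait Connection

def connectToDatabase(url: String): Connection = ???   // stubbed for the sketch
def queryForStudents(conn: Connection): List[Student] = ???
def sortByGrade(students: List[Student]): List[Student] = students.sortBy(_.grade)
def takeTheFirstN(students: List[Student], n: Int): List[Student] = students.take(n)

// The top-level logic reads as a composition of abstractions:
def topStudents(url: String, n: Int): List[Student] =
  takeTheFirstN(sortByGrade(queryForStudents(connectToDatabase(url))), n)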
So now we have functions, and it is natural to introduce the next level of organization: collections of functions. It is common to see that some functions build families around a common abstraction, e.g., student_name, student_grade, student_courses, etc.; they all revolve around the same abstraction, student. The same goes for connection_establish, connection_close, etc. Therefore we need some mechanism that ties those functions together. Here we start to have options. Some languages took the OOP path, in which objects and classes are the units of organization, where a bundle of functions together with some state is called an object. Other languages took a different path and decided to combine functions into static structures called modules. The main difference is that a module is a static, compile-time structure, whereas objects are runtime structures that have to be created at runtime before they can be used. As a result, objects naturally tend to contain state, while modules do not (they contain only code). And objects are inherently regular values, which you can assign to variables, store in files, and otherwise manipulate as data. Classical modules, contrary to objects, have no runtime representation, so you can't pass modules as parameters to your functions, store them in a list, or otherwise perform any computations on modules. This is basically what people mean by "first-class citizen": the ability to treat an entity as a plain value.
Back to composable programs. In order to make objects/modules composable, we need to be sure that they create abstractions. For functions, the abstraction boundary is clearly defined: it is the tuple of parameters. For objects, we have the notions of interfaces and classes, while for modules we have only interfaces. Since modules are inherently simpler (they do not include state), we do not have to deal with constructing and deconstructing them, so we do not need the more complicated notion of a class. Both classes and interfaces are ways to classify objects and modules by some criteria, so that we can reason about different modules without looking into the implementation, the same way we reasoned about the connect_to_database, query_for_students, et al. functions - only based on their names and interfaces (and probably documentation). Now we can have a class student or a module Student, both defining an abstraction called student, so that we can save a lot of brain power without having to deal with how those students are implemented.
And beyond making our programs easier to understand, abstractions give us another benefit: generalization. Since we don't need to reason about the implementation of a function or a module, all implementations are interchangeable to some degree. Therefore, we can write our programs so that they express their behavior in a general way, without breaking the abstractions, and then choose particular instances when we run them. Objects are runtime instances, which essentially means that we can choose our implementation at runtime. Which is nice. Classes, however, are rarely first-class citizens, so we have to invent various cumbersome methods to make the selection, like the Abstract Factory and Builder design patterns. For modules, the situation is even worse: since they are inherently a compile-time structure, we have to choose our implementation at program building/linking time. Which is not what people want to do in the modern world.
And here come first-class modules. Being an amalgamation of modules and objects, they give us the best of both worlds: easy-to-reason-about stateless structures that are, at the same time, genuine first-class citizens, which you can store in a variable, put into a list, and use to select the desired implementation at runtime.
Speaking of OCaml, under the hood a first-class module is simply a record of functions. In OCaml, you can even add state to a first-class module, making it practically indistinguishable from an object. This brings us to another topic: in the real world, the separation between objects and modules is not that clear-cut. For example, OCaml provides both modules and objects, and you can put objects inside modules and even vice versa. In C/C++ we have compilation units, symbol visibility, opaque data types, and header files, which enable some sort of modular programming, and we also have structures and namespaces. Therefore, the difference is sometimes hard to tell.
So, to summarize: modules are pieces of code with a well-defined interface for accessing that code. First-class modules are modules that can be manipulated as regular values, e.g., stored in a data structure, assigned to a variable, and picked at runtime.
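To connect this back to the question's Scala angle, here is a hedged Scala sketch (all names hypothetical) of modules treated as plain values: objects implementing a common trait are stored in a map, passed around, and selected at runtime.

trait Logger {
  def log(msg: String): Unit
}

object ConsoleLogger extends Logger {
  def log(msg: String): Unit = println(msg)
}

object SilentLogger extends Logger {
  def log(msg: String): Unit = ()   // discard everything
}

// "Modules" as ordinary values: store them in a data structure and pick one at runtime.
val loggers: Map[String, Logger] = Map("console" -> ConsoleLogger, "silent" -> SilentLogger)

def run(logger: Logger): Unit = logger.log("starting up")

run(loggers.getOrElse(sys.env.getOrElse("LOG_MODE", "console"), ConsoleLogger))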
OCaml perspective here.
Modules and classes are very different.
First of all, classes in OCaml are a very specific (and complex) feature. To go into some detail, classes implement inheritance, row polymorphism and dynamic dispatch (aka virtual methods). This allows them to be highly flexible, at the expense of some efficiency.
Modules, however, are quite a different thing altogether.
Indeed, you can see modules as atomic mini-libraries, and usually they are used to define a type and its accessors, but they are much more powerful than just that.
Modules allow you to create several types, as well as module types and submodules. Basically, they allow you to create complex compartmentalization and abstraction.
Functors give you behavior similar to C++'s templates, except they are safe. Basically, they are functions on modules, which allow you to parameterize a data structure or algorithm over some other module.
Modules are usually resolved statically and are therefore easy to inline, allowing you to write clear code without fear of a loss in efficiency.
Now, a first-class citizen is an entity that can be put in a variable, passed to a function and tested for equality. In a way, it means they will be dynamically evaluated.
For example, suppose you have a module Jpeg and a module Png that allow you to manipulate different kinds of images. Statically, you don't know what kind of image you'll need to display. So you can use first-class modules:
(* Jpeg and Png are assumed to be defined elsewhere, both matching the
   IMG_HANDLER module type, e.g. sig val display : string -> unit end. *)
let get_img_type filename =
  match Filename.extension filename with
  | ".jpg" | ".jpeg" -> (module Jpeg : IMG_HANDLER)
  | ".png" -> (module Png : IMG_HANDLER)
  | ext -> invalid_arg ("unsupported image type: " ^ ext)

let display_img img_type filename =
  let module Handler = (val img_type : IMG_HANDLER) in
  Handler.display filename
The main differences between a module and an object usually are
Modules are second-class, i.e., they are rather static entities that cannot be passed around as values, while objects can.
Modules can contain types and all other forms of declarations (and types can be made abstract), while objects typically cannot.
However, as you note, there are languages where modules can be wrapped up as first-class values (e.g. OCaml) and there are languages where objects can contain types (e.g. Scala). That blurs the line a little. There still tend to be various biases towards certain patterns, with different trade-offs made in the type systems. For example, objects focus on recursive types, while modules focus on type abstraction and allowing any definition. It is a very hard problem to support both at the same time without severe compromises, since that quickly leads to an undecidable type system.
As has been mentioned already, "modules", "classes" and "objects" are more like tendencies than strict formal definitions. And if you implement modules as objects for example, as I understand Scala does, then obviously there are no fundamental differences between them, but mostly just syntactic differences that make them more convenient for certain use cases.
In regards to OCaml specifically though, here's a practical example of something you cannot do with modules that you can do with classes because of fundamental differences in implementation:
Modules have functions, which can reference each other recursively using the rec and and keywords. A module can also "inherit" the implementation of another module using include and override its definitions. For example:
module Base = struct
  let name = "base"
  let print () = print_endline name
end

module Child = struct
  include Base
  let name = "child"
end
but because modules are early bound, that is, names are resolved at compile time, it's not possible to get Base.print to reference Child.name instead of Base.name. At least not without altering both Base and Child significantly to explicitly enable it:
module AbstractBase(T : sig val name : string end) = struct
  let name = T.name
  let print () = print_endline name
end

module Base = struct
  include AbstractBase(struct let name = "base" end)
end

module Child = struct
  include AbstractBase(struct let name = "child" end)
end
With classes on the other hand, overriding is trivial and the default:
class base = object (self)
  method name = "base"
  method print = print_endline self#name
end

class child = object
  inherit base
  method! name = "child"
end
Classes can reference themselves through a variable conventionally named this or self (in OCaml you can name it whatever you want, but self is the convention). They are also late bound, meaning that methods are resolved at runtime and can therefore call method implementations that didn't exist when the class was defined. This is called open recursion.
So why aren't modules late bound too? Primarily for performance reasons I think. Doing a dictionary search on the name of every function call will undoubtedly have a significant impact on execution time.
So, pattern matching in functional languages is pretty awesome. I am wondering why most imperative languages haven't implemented this feature. To my understanding, Scala is the only "mainstream" imperative language that has pattern matching, and the case/switch structure is just so much less powerful.
In particular, I am interested in whether the lack of pattern matching is because of technical reasons or historical reasons?
It's mostly historic. Pattern matching -- and more to the point, algebraic data types -- was invented around 1980 for the functional language Hope. From there it quickly made it into ML, and was later adopted in other functional languages like Miranda and Haskell. The mainstream imperative world usually takes a few decades longer to pick up new programming language ideas.
One reason that particularly hindered adoption is that the mainstream has long been dominated by object-oriented ideology. In that world, anything that isn't expressed by objects and subtyping is considered morally "wrong". One could argue that algebraic data types are kind of an antithesis to that.
Perhaps there are also some technical reasons that make it more natural in functional languages:
Regular scoping rules and fine-grained binding constructs for variables are the norm in functional languages, but less common in mainstream imperative languages.
Especially so since patterns bind immutable variables.
Type checking pattern matches relies on the more well-formed structure and rigidness of functional type systems, and their close ties to computational logic. Mainstream type systems are usually far away from that.
Algebraic data types require heap allocation (unless you want to waste a lot of space and prohibit recursion), and would be very inconvenient without garbage collection. However, GCs in mainstream languages, where they exist, are typically optimised for heavyweight objects, not lightweight functional data.
Until recently (more precisely: until Scala), it was believed that pattern matching was incompatible with representation ignorance (i.e. the defining characteristic of OO). Since OO is a major paradigm in mainstream languages, having a seemingly irreconcilable feature in a mainstream language didn't appear to make sense.
In Scala, pattern matching is reconciled with OO, simply by having the match operations be method calls on an object. (Rather simple in hindsight, no?) In particular, matches are performed by calling methods on extractor objects, which, just like any other object, only have access to the public API of the object being examined, thus not breaking encapsulation.
A pattern matching library inspired by Scala, in which patterns are first-class objects themselves (inspired by F#'s Active Patterns) was added to Newspeak, a very dynamic language that takes OO very seriously. (Newspeak doesn't even have variables, just methods.)
Note that regular expressions are an example of a limited form of pattern matching. Polymorphic method dispatch can also be seen as an example of a limited form of pattern matching (without the extraction features). In fact, method dispatch is powerful enough to implement full pattern matching as evidenced by Scala and especially Newspeak (in the latter, pattern matching is even implemented as a library, completely separate from the language).
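As a hedged sketch of the Scala mechanism described above (Even is a hypothetical extractor, not library code): the match desugars into calls to Even.unapply, so the pattern only ever uses the public API of the value being examined.

// An extractor object: matching against Even(...) calls Even.unapply.
object Even {
  def unapply(n: Int): Option[Int] =
    if (n % 2 == 0) Some(n / 2) else None
}

def describe(n: Int): String = n match {
  case Even(half) => s"$n is twice $half"
  case _          => s"$n is odd"
}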
Here are my 2 cents. Take a simple Option pattern match:
val o = Some(1)
o match {
  case Some(i) => i + 1
  case None    => 0
}
There are so many things going on here in Scala. The compiler checks that you have an exhaustive match, creates a new variable i scoped to the case clause, and of course somehow extracts the value from the Option in the first place.
Extracting a value would be doable in languages like Java: implement the unapply method(s) of some agreed-upon interface and you are done. Now you can return values to a caller.
Providing this extracted value to the caller, which essentially requires a closure, is not so convenient in regular OO languages without closure support. It can become quite ugly in Java 7, where you would probably use the Observer pattern.
If you add Scala's other pattern matching abilities into the mix - matching on specific types, i.e. case i: Int =>; using the default clause _ when you want to (the compiler has to check exhaustiveness somehow whether you use _ or not); additional checks like case i if i > 0 =>; and so on - it quickly becomes very ugly to use from the client side (think Java).
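For reference, a hedged Scala sketch combining the features just mentioned (type pattern, guard, and the _ default clause); the names are hypothetical:

def describe(x: Any): String = x match {
  case i: Int if i > 0 => s"positive int $i"        // type pattern plus a guard
  case i: Int          => s"non-positive int $i"    // plain type pattern
  case s: String       => s"string of length ${s.length}"
  case _               => "something else"          // the default clause
}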
If you drop all those fancy pattern matching features your pattern match would be pretty much at the level of Java's switch statement.
It looks like it just would not be worth it, even if possible, to implement this using anonymous classes, without lambda support and a strong type system.
I would say that it is more for historical than technical reasons. Pattern matching works well with algebraic data types which have also historically been associated with functional languages.
Scala is probably a bad example of an imperative language with pattern matching because it tends to favour the functional style, though it doesn't enforce it.
An example of a modern, mostly imperative language with pattern matching is Rust.
It is imperative and runs on the metal, but still has algebraic data types, pattern matching and other features that are more common in functional languages. Its compiler implementation is a lot more complex than that of a C compiler, though.
I am in the process of writing a toy compiler in Scala. The target language itself looks like Scala but is an open field for experimentation.
After several large refactorings I can't find a good way to model my abstract syntax tree. I would like to use the facilities of Scala's pattern matching; the problem is that the tree carries information that changes during the compilation process (like types and symbols).
I can see a couple of solutions, none of which I like:
case classes with mutable fields (I believe the Scala compiler does this): the problem is that those fields are not present at each stage of the compilation and thus have to be nulled (or Option'd), and it becomes really heavy to debug/write code. Moreover, if, for example, I find a node with a null type after the typing phase, I have a really hard time finding the cause of the bug.
a huge trait/case class hierarchy: something like Node, NodeWithSymbol, NodeWithType, ... Seems like a pain to write AND to work with
something completely hand-crafted with extractors
I'm also not sure whether it is good practice to go with a fully immutable AST, especially in Scala where there is no implicit sharing (because the compiler is not aware of immutability) and it could hurt performance to copy the tree all the time.
Can you think of an elegant pattern to model my tree using Scala's powerful type system?
TL;DR I prefer to keep the AST immutable and carry things like type information in a separate structure, e.g. a Map, that can be referred to by IDs stored in the AST. But there is no perfect answer.
You're by no means the first to struggle with this question. Let me list some options:
1) Mutable structures that get updated at each phase. All the up and downsides you mention.
2) Traits/cake pattern. Feasible, but expensive (there's no sharing) and kinda ugly.
3) A new tree type at each phase. In some ways this is the theoretically cleanest. Each phase can deal only with a structure produced for it by the previous phase. Plus the same approach carries all the way from front end to back end. For instance, you may "desugar" at some point and having a new tree type means that downstream phase(s) don't have to even consider the possibility of node types that are eliminated by desugaring. Also, low level optimizations usually need IRs that are significantly lower level than the original AST. But this is also a lot of code since almost everything has to be recreated at each step. This approach can also be slow since there can be almost no data sharing between phases.
4) Label every node in the AST with an ID and use that ID to reference information in other data structures (maps and vectors and such) that hold information computed for each phase. In many ways this is my favorite. It retains immutability, maximizes sharing and minimizes the "excess" code you have to write. But you still have to deal with the potential for "missing" information that can be tricky to debug. It's also not as fast as the mutable option, though faster than any option that requires producing a new tree at each phase.
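A minimal, hedged sketch of option 4 in Scala (all names are hypothetical): every node carries a stable ID, and per-phase information such as types lives in a separate map keyed by those IDs, so the tree itself stays immutable and shared.

final case class NodeId(value: Int)

sealed trait Expr { def id: NodeId }
final case class Lit(id: NodeId, value: Int)            extends Expr
final case class Add(id: NodeId, lhs: Expr, rhs: Expr)  extends Expr

sealed trait Type
case object IntType extends Type

// The typing phase never touches the tree; it just produces a map of inferred types.
def typeOf(e: Expr, acc: Map[NodeId, Type] = Map.empty): Map[NodeId, Type] = e match {
  case Lit(id, _)        => acc + (id -> IntType)
  case Add(id, lhs, rhs) => typeOf(rhs, typeOf(lhs, acc)) + (id -> IntType)
}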
I recently started writing a toy verifier for a small language, and I am using the Kiama library for the parser, resolver and type checker phases.
Kiama is a Scala library for language processing. It enables convenient analysis and transformation of structured data. The programming styles supported by the library are based on well-known formal language processing paradigms, including attribute grammars, tree rewriting, abstract state machines, and pretty printing.
I'll try to summarise my (fairly limited) experience:
[+] Kiama comes with several examples, and the main contributor usually responds quickly to questions asked on the mailing list
[+] The attribute grammar paradigm allows for a nice separation into "immutable components" of the nodes, e.g., names and subnodes, and "mutable components", e.g., type information
[+] The library comes with a versatile rewriting system which - so far - covered all my use cases
[+] Parts of the library, e.g., the pretty printer, make nice examples of DSLs and of various functional patterns/approaches/ideas
[-] The learning curve is definitely steep, even with examples and the mailing list at hand
[-] Implementing the resolving phase in a "purely functional" style (cf. my question) seems tricky, but a hybrid approach (which I haven't tried yet) seems to be possible
[-] The attribute grammar paradigm and the resulting separation of concerns doesn't make it obvious how to document the properties nodes have in the end (cf. my question)
[-] Rumour has it, that the attribute grammar paradigm does not yield the fastest implementations
Summarising my summary, I enjoy using Kiama a lot and I strongly recommend that you give it a try, or at least have a look at the examples.
(PS. I am not affiliated with Kiama)
What are the most commonly held misconceptions about the Scala language, and what counter-examples exist to these?
UPDATE
I was thinking more about various claims I've seen, such as "Scala is dynamically typed" and "Scala is a scripting language".
I accept that "Scala is [Simple/Complex]" might be considered a myth, but it's also a viewpoint that's very dependent on context. My personal belief is that it's the very same features that can make Scala appear either simple or complex depending oh who's using them. Ultimately, the language just offers abstractions, and it's the way that these are used that shapes perceptions.
Not only that, but it has a certain tendency to inflame arguments, and I've not yet seen anyone change a strongly-held viewpoint on the topic...
Myth: That Scala’s “Option” and Haskell’s “Maybe” types won’t save you from null. :-)
Debunked: Why Scala's "Option" and Haskell's "Maybe" types will save you from null by James Iry.
Myth: Scala supports operator overloading.
Actually, Scala just has very flexible method naming rules and infix syntax for method invocation, with special rules for determining method precedence when the infix syntax is used with 'operators'. This subtle distinction has critical implications for the utility and potential for abuse of this language feature compared to true operator overloading (a la C++), as explained more thoroughly in James Iry's answer to this question.
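A hedged illustration of the point (Vec is a hypothetical class, not library code): + below is an ordinary method, and the infix call is just notation for a regular method invocation.

final case class Vec(x: Double, y: Double) {
  def +(other: Vec): Vec = Vec(x + other.x, y + other.y)   // '+' is just a method name
}

val v = Vec(1, 2) + Vec(3, 4)   // exactly the same as Vec(1, 2).+(Vec(3, 4))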
Myth: methods and functions are the same thing.
In fact, a function is a value (an instance of one of the FunctionN classes), while a method is not. Jim McBeath explains the differences in greater detail. The most important practical distinctions are listed below, followed by a short sketch:
Only methods can have type parameters
Only methods can take implicit arguments
Only methods can have named and default parameters
When referring to a method, an underscore is often necessary to distinguish method invocation from partial function application (e.g. str.length evaluates to a number, while str.length _ evaluates to a zero-argument function).
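A hedged sketch of the distinction (Scala 2 syntax, hypothetical names):

def twice(x: Int): Int = x * 2       // a method: not itself a value
val twiceF: Int => Int = twice _     // eta-expansion produces a Function1 value

List(1, 2, 3).map(twiceF)            // List(2, 4, 6): functions are plain values
def first[A](xs: List[A]): Option[A] = xs.headOption   // type parameters: methods only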
I disagree with the argument that Scala is hard because you can use very advanced features to do hard stuff with it. The scalability of Scala means that you can write DSL abstractions and high-level APIs in Scala itself that would otherwise need a language extension. So, to be fair, you need to compare Scala libraries to other languages' compilers. People don't say that C# is hard because (I assume, having no first-hand knowledge of this) the C# compiler is pretty impenetrable. For Scala it's all out in the open. But we need to get to the point where we make clear that most people don't need to write code at this level, nor should they.
I think a common misconception amongst many scala developers, those at EPFL (and yourself, Kevin) is that "scala is a simple language". The argument usually goes something like this:
scala has few keywords
scala reuses the same few constructs (e.g. PartialFunction syntax is used as the body of a catch block)
scala has a few simple rules which allow you to create library code (which may appear as if the language has special keywords/constructs). I'm thinking here of implicits; methods containing colons; allowed identifier symbols; the equivalence of X(a, b) and a X b with extractors. And so on
scala's declaration-site variance means that the type system just gets out of your way. No more wildcards and ? super T
My personal opinion is that this argument is completely and utterly bogus. Scala's type system taken together with implicits allows one to write frankly impenetrable code for the average developer. Any suggestion otherwise is just preposterous, regardless of what the above "metrics" might lead you to think. (Note here that those who I've seen scoffing at the non-complexity of Java on Twitter and elsewhere happen to be uber-clever types who, it sometimes seems, had a grasp of monads, functors and arrows before they were out of short pants).
The obvious arguments against this are (of course):
you don't have to write code like this
you don't have to pander to the average developer
Of these, it seems to me that only #2 is valid. Whether or not you write code quite as complex as scalaz, I think it's just silly to use the language (and continue to use it) with no real understanding of the type system. How else can one get the best out of the language?
There is a myth that Scala is difficult because Scala is a complex language.
This is false--by a variety of metrics, Scala is no more complex than Java. (Size of grammar, lines of code or number of classes or number of methods in the standard API, etc..)
But it is undeniably the case that Scala code can be ferociously difficult to understand. How can this be, if Scala is not a complex language?
The answer is that Scala is a powerful language. Unlike Java, which has many special constructs (like enums) that accomplish one particular thing--and requires you to learn specialized syntax that applies just to that one thing, Scala has a variety of very general constructs. By mixing and matching these constructs, one can express very complex ideas with very little code. And, unsurprisingly, if someone comes along who has not had the same complex idea and tries to figure out what you're doing with this very compact code, they may find it daunting--more daunting, even, than if they saw a couple of pages of code to do the same thing, since then at least they'd realize how much conceptual stuff there was to understand!
There is also an issue of whether things are more complex than they really need to be. For example, some of the type gymnastics present in the collections library make the collections a joy to use but perplexing to implement or extend. The goals here are not particularly complicated (e.g. subclasses should return their own types), but the methods required (higher-kinded types, implicit builders, etc.) are complex. (So complex, in fact, that Java just gives up and doesn't try, rather than doing it "properly" as in Scala. Also, in principle, there is hope that this will improve in the future, since the method can evolve to more closely match the goal.) In other cases, the goals are complex; list.filter(_<5).sorted.grouped(10).flatMap(_.tail.headOption) is a bit of a mess, but if you really want to take all numbers less than 5, and then take every 2nd number out of 10 in the remaining list, well, that's just a somewhat complicated idea, and the code pretty much says what it does if you know the basic collections operations.
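To make that last chain concrete, here is a hedged worked evaluation on a small, made-up input:

val list = List(3, 7, 1, 4, 1, 5, 9, 2, 6)
val result = list.filter(_ < 5).sorted.grouped(10).flatMap(_.tail.headOption).toList
// filter(_ < 5)                -> List(3, 1, 4, 1, 2)
// sorted                       -> List(1, 1, 2, 3, 4)
// grouped(10)                  -> a single group of at most 10: List(1, 1, 2, 3, 4)
// _.tail.headOption per group  -> the group's 2nd element, here Some(1)
// result == List(1)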
Summary: Scala is not complex, but it allows you to compactly express complex ideas. Compact expression of complex ideas can be daunting.
There is a myth that Scala is non-deployable, whereas a wide range of third-party Java libraries can be deployed without a second thought.
To the extent that this myth exists, I suspect it exists among people who are not accustomed to separating a virtual machine and API from a language and compiler. If java == javac == Java API in your mind, you might get a little nervous if someone suggests using scalac instead of javac, because you see how nicely your JVM runs.
Scala ends up as JVM bytecode, plus its own custom library. There's no reason to be any more worried about deploying Scala, on a small scale or as part of some other large project, than there is about deploying any other library that may or may not stay compatible with whichever JVM you prefer. Granted, the Scala development team is not backed by quite as much force as the Google collections or Apache Commons, but it's got at least as much weight behind it as something like the Java Advanced Imaging project.
Myth:
def foo() = "something"
and
def bar = "something"
are the same.
They are not; you can call foo(), but bar() tries to call the apply method of StringLike with no arguments (resulting in an error).
Some common misconceptions related to Actors library:
Actors handle incoming messages in parallel, in multiple threads / against a thread pool. (In fact, handling messages in multiple threads is contrary to the actor concept and may lead to race conditions; all messages are handled sequentially in one thread. Thread-based actors use one thread both for mailbox processing and execution, while event-based actors may share one VM thread for execution, using a multi-threaded executor to schedule mailbox processing.)
Uncaught exceptions don't change actor's behavior/state (in fact, all uncaught exceptions terminate the actor)
Myth: You can replace a fold with a reduce when computing something like a sum from zero.
This is a common mistake/misconception among new users of Scala, particularly those without prior functional programming experience. The following expressions are not equivalent:
seq.foldLeft(0)(_+_)
seq.reduceLeft(_+_)
The two expressions differ in how they handle the empty sequence: the fold produces a valid result (0), while the reduce throws an exception.
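A short hedged sketch of that difference on an empty sequence:

val empty = Seq.empty[Int]

empty.foldLeft(0)(_ + _)    // 0: the fold starts from the explicitly supplied zero
// empty.reduceLeft(_ + _)  // would throw UnsupportedOperationException: there is no starting element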
Myth: Pattern matching doesn't fit well with the OO paradigm.
Debunked here by Martin Odersky himself. (Also see this paper - Matching Objects with Patterns - by Odersky et al.)
Myth: this.type refers to the same type represented by this.getClass.
As an example of this misconception, one might assume that in the following code the type of v.me is B:
trait A { val me: this.type = this }
class B extends A
val v = new B
In reality, this.type refers to the type whose only instance is this. In general, x.type is the singleton type whose only instance is x. So in the example above, the type of v.me is v.type. The following session demonstrates the principle:
scala> val s = "a string"
s: java.lang.String = a string
scala> var v: s.type = s
v: s.type = a string
scala> v = "another string"
<console>:7: error: type mismatch;
found : java.lang.String("another string")
required: s.type
v = "another string"
Myth: Scala has type inference and refinement types (structural types), whereas Java does not.
The myth is busted by James Iry.
Myth: that Scala is highly scalable, without qualifying what forms of scalability.
Scala may indeed be highly scalable in terms of the ability to express higher-level denotational semantics, and this makes it a very good language for experimentation and even for scaling production at the project-level scale of top-down coordinated compositionality.
However, every referentially opaque language (i.e. one that allows mutable data structures) is imperative (and not declarative) and will not scale to WAN-style bottom-up, uncoordinated compositionality and security. In other words, imperative languages are compositional (and security) spaghetti w.r.t. uncoordinated development of modules. I realize such uncoordinated development is perhaps currently considered by most to be a "pipe dream" and thus perhaps not a high priority. And this is not to disparage the benefit to compositionality (i.e. eliminating corner cases) that higher-level semantic unification can provide, e.g. a category theory model for the standard library.
There will possibly be significant cognitive dissonance for many readers, especially since there are popular misconceptions about imperative vs. declarative (i.e. mutable vs. immutable) and eager vs. lazy; e.g. monadic semantics are never inherently imperative, despite claims to the contrary. Yes, in Haskell the IO monad is imperative, but its being imperative has nothing to do with its being a monad.
I explained this in more detail in the "Copute Tutorial" and "Purity" sections, which is either at the home page or temporarily at this link.
My point is that I am very grateful Scala exists, but I want to clarify what Scala scales and what it does not. I need Scala for what it does well; for me it is the ideal platform to prototype a new declarative language. But Scala itself is not exclusively declarative, and afaik referential transparency can't be enforced by the Scala compiler, other than by remembering to use val everywhere.
I think my point applies to the complexity debate about Scala. I have found (so far, and mostly conceptually, since I have limited actual experience with my new language) that removing mutability and loops, while retaining diamond multiple inheritance subtyping (which Haskell doesn't have), radically simplifies the language. For example, the Unit fiction disappears, and afaics a slew of other issues and constructs become unnecessary, e.g. a non-category-theory standard library, for comprehensions, etc.
With the growth of dynamically typed languages, as they give us more flexibility, there is the very likely probability that people will write programs that go beyond what the specification allows.
My thinking was influenced by this question, when I read the answer by bobince:
A question about JavaScript's slice and splice methods
The basic thought is that splice, in Javascript, is specified to be used in only certain situations, but, it can be used in others, and there is nothing that the language can do to stop it, as the language is designed to be extremely flexible.
Unless someone reads through the specification and decides to adhere to it, I am fairly certain that there are many such violations occurring.
Is this a problem, or a natural extension of writing such flexible languages? Or should we expect tools like JSLint to help be the specification police?
I liked one answer in this question, that the implementation of python is the specification. I am curious if that is actually closer to the truth for these types of languages, that basically, if the language allows you to do something then it is in the specification.
Is there a Python language specification?
UPDATE:
After reading a couple of comments, I thought I would check the splice method in the spec, and this is what I found at the bottom of pg 104, http://www.mozilla.org/js/language/E262-3.pdf. So it appears that I can use splice on the array of children without violating the spec. I just don't want people to get bogged down in my example, but hopefully to consider the question.
The splice function is intentionally generic; it does not require that its this value be an Array object.
Therefore it can be transferred to other kinds of objects for use as a method. Whether the splice function
can be applied successfully to a host object is implementation-dependent.
UPDATE 2:
I am not interested in this being about javascript, but language flexibility and specs. For example, I expect that the Java spec specifies you can't put code into an interface, but using AspectJ I do that frequently. This is probably a violation, but the writers didn't predict AOP and the tool was flexible enough to be bent for this use, just as the JVM is also flexible enough for Scala and Clojure.
Whether a language is statically or dynamically typed is really a tiny part of the issue here: a statically typed one may make it marginally easier for code to enforce its specs, but marginally is the key word here.

Only "design by contract" - a language letting you explicitly state preconditions, postconditions and invariants, and enforcing them - can help guard you against users of your libraries empirically discovering what exactly the library will let them get away with, and taking advantage of those discoveries to go beyond your design intentions (possibly constraining your future freedom in changing the design or its implementation). And "design by contract" is not supported in mainstream languages - Eiffel is the closest to that, and few would call it "mainstream" nowadays - presumably because its costs (mostly, inevitably, at runtime) don't appear to be justified by its advantages.

"Argument x must be a prime number", "method A must have been previously called before method B can be called", "method C cannot be called any more once method D has been called", and so on - the typical kinds of constraints you'd like to state (and have enforced implicitly, without having to spend substantial programming time and energy checking for them yourself) just don't lend themselves well to being framed in the context of what little a statically typed language's compiler can enforce.
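A hedged Scala sketch of the kind of constraints just listed, checked by hand at runtime (the class and its protocol are invented for illustration; this is not language-level design-by-contract support, just manual checks):

class Connection {
  private var opened = false

  def open(): Unit = {
    require(!opened, "open() must not be called twice")      // precondition
    opened = true
  }

  def send(data: String): Unit = {
    require(opened, "open() must be called before send()")   // protocol constraint
    // ... actually send the data ...
  }
}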
I think that this sort of flexibility is an advantage as long as your methods are designed around well defined interfaces rather than some artificial external "type" metadata. Most of the array functions only expect an object with a length property. The fact that they can all be applied generically to lots of different kinds of objects is a boon for code reuse.
The goal of any high-level language design should be to reduce the amount of code that needs to be written in order to get stuff done, without harming readability too much. The more code that has to be written, the more bugs get introduced. Restrictive type systems can be (if not well designed) a pervasive lie at worst and a premature optimisation at best. I don't think overly restrictive type systems aid in writing correct programs, the reason being that a type is merely an assertion, not necessarily based on evidence.
By contrast, the array methods examine their input values to determine whether they have what they need to perform their function. This is duck typing, and I believe that this is more scientific and "correct", and it results in more reusable code, which is what you want. You don't want a method rejecting your inputs because they don't have their papers in order. That's communism.
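For what it's worth, a hedged Scala analogue of "only expect an object with a length property" is a structural type; the example below is illustrative only:

import scala.language.reflectiveCalls

// Accept any value that offers a 'length' member, regardless of its declared type.
def describeLength(x: { def length: Int }): String =
  s"has length ${x.length}"

describeLength("hello")         // a String has a length method
describeLength(List(1, 2, 3))   // so does a List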
I do not think your question really has much to do with dynamic vs. static typing. Really, I can see two cases: on one hand, there are things like Duff's device that martin clayton mentioned; that usage is extremely surprising the first time you see it, but it is explicitly allowed by the semantics of the language. If there is a standard, that kind of idiom may appear in later editions of the standard as a specific example. There is nothing wrong with these; in fact, they can (unless overused) be a great productivity boost.
The other case is that of programming to the implementation. Such a case would be an actual abuse, coming from either ignorance of a standard, or lack of a standard, or having a single implementation, or multiple implementations that have varying semantics. The problem is that code written in this way is at best non-portable between implementations and at worst limits the future development of the language, for fear that adding an optimization or feature would break a major application.
It seems to me that the original question is a bit of a non sequitur. If the specification explicitly allows a particular behavior (as MUST, MAY, SHALL or SHOULD) then any compiler/interpreter that allows/implements the behavior is, by definition, compliant with the language. This would seem to be the situation proposed by the OP in the comments section - the JavaScript specification supposedly* says that the function in question MAY be used in different situations, and thus it is explicitly allowed.
If, on the other hand, a compiler/interpreter implements or allows behavior that is expressly forbidden by a specification, then the compiler/interpreter is, by definition, operating outside the specification.
There is yet a third scenario, and an associated, well defined, term for those situations where the specification does not define a behavior: undefined. If the specification does not actually specify a behavior given a particular situation, then the behavior is undefined, and may be handled either intentionally or unintentionally by the compiler/interpreter. It is then the responsibility of the developer to realize that the behavior is not part of the specification, and, should s/he choose to leverage the behavior, the developer's application is thereby dependent upon the particular implementation. The interpreter/compiler providing that implementation is under no obligation to maintain the officially undefined behavior beyond backwards compatibility and whatever commitments the producer may make. Furthermore, a later iteration of the language specification may define the previously undefined behavior, making the compiler/interpreter either (a) non-compliant with the new iteration, or (b) come out with a new patch/version to become compliant, thereby breaking older versions.
* "supposedly" because I have not seen the spec, myself. I go by the statements made, above.