What is a head in Coq head normal form? - coq

I am having trouble understanding Coq/CIC head normal form. More specifically, I don't understand what a head is. The reference manual (8.5p1) says that
Any term can be written as: λ x1:T1 . ... λ xk:Tk . (t0 t1 ... tn), where t0 is not an application.
But the above definition is stated in the negative: it requires t0 not to be an application, but it does not say what t0 can be. In fact, as far as I can remember, the only thing that can be written as t0 t1 t2 ... is an application of a function or constructor t0.
Can someone help spell out what exactly the head can be here?

Yes, t0 t1 ... tn is an application, but the manual is talking about the form of t0 standing by itself.
In the manual, t or ti for some i usually denotes a term, so t0 can be any term that is not an application, e.g. a λ-abstraction, etc. -- see the list in sect. 4.1.2 of the manual and sect. 1.2.1 on syntax.
Also, this Wikipedia page might be of some help.

Related

Constant as a function of other constants - TI-Nspire CAS

I have these three expressions:
E1=(c1-c2)/(c1-c5)
E2=(c3-c4)/(c3-c7)
Es=(c1-c4)/(c1-c5)
How can I express Es as a function of E1 and E2?
Es=E1+E2*(c3-c7)/(c1-c5)
Thank you for your time.
This is the same type of problem you had in your previous question, and you were right: TI-Basic is not made for such things. There is no automatic way to do what you want, but in each individual case you can do the same kind of gymnastics as before. In this case, you have a system of 2 equations with many variables, so you can eliminate 2 of them. You picked (for some reason) c2 and c4 to eliminate, so the system must be solved for those 2 variables and the solution substituted into the third equation:
i1:=solve({E1=(c1-c2)/(c1-c5),E2=(c3-c4)/(c3-c7)},{c2,c4})
Es:=(c1-c4)/(c1-c5)|i1
Result:
Es:=(c1+c3*(e2-1)-c7*e2)/(c1-c5)
Also, in this case the E1 and E2 expressions are independent, so they can be solved separately; and since the Es expression doesn't contain c2, you can just do:
i1:=solve(E2=(c3-c4)/(c3-c7),c4)
Es:=(c1-c4)/(c1-c5)|i1
...with the same result. This is not the expression you expected, but your proposed solution is not correct anyway.

How to use forall X in answer set programming (dlv) (answer set prolog)

I have the following facts in DLV; knows(X,Y) means X knows Y.
knows(adam, dan).
knows(adam,alice).
knows(adam,peter).
knows(adam,eva).
knows(dan, adam).
knows(dan,alice).
knows(dan,peter).
knows(eva, alice).
knows(eva,peter).
knows(alice, peter).
knows(peter, alice).
I have defined the following predicates,
person(X) :- knows(X, _).
This will give all the persons from the facts. I am trying to find a predicate popular(X) that will give the popular persons. It is defined such that if all persons know X, then X is popular. The answer for the above list of facts is alice and peter. I defined it as below:
popular(X):-person(X),knows(_,X).
X is popular if it's a person and everyone knows X. But I am getting all persons as the result when I run it. Where am I making a mistake?
As per the comment string on the original post, you have defined popular to be "a person that is known by someone". Since - in your knowledge base - everyone is known by someone, everyone is popular.
Assuming "a popular person is one whom everyone knows but the popular person knows only other popular persons"; if we want to know if X is popular:
We either need to count all the people that know X and then compare that to the number of people;
Or we need to verify that it is never the case that someone doesn't know X.
I'll focus on the second way to do this, using forall. Take some time and run some tests on your own to understand how that works. Here's an example of what you might do:
popular(X) :- person(X),
    forall(
        ( person(Y),
          X \= Y
        ),
        knows(Y,X)
    ).
If you run this, you get Alice and Peter as answers.
But if we include the other condition:
popular(X) :- person(X),
    forall(
        ( person(Y),
          X \= Y
        ),
        knows(Y,X)
    ),
    forall(
        knows(X,Z),
        popular(Z)
    ).
That last forall says X needs to know people that are popular exclusively... and now, if you run this, you're most likely going to get an 'out of local stack' error - it's a bottomless recursive definition.
You always need to check if someone is popular to know if someone is popular to know if someone is popular... Try to think about the problem and why that is. Is there a way to check if someone is popular without needing to check if someone else is popular? What if someone 'knows' themselves? What if two people know each other? This might take a slightly more complex approach to solve.
By the way, notice that your definition of person yields the same person multiple times - X is derived as a person once for every person X knows. Besides making every check take a lot longer (since there are more 'people' to check), this might be a problem if you decide to go with the first approach (the counting one).
Wouldn't it make sense to define explicitly who are the people and then define 'knowing' as a relation between people?
person('Alice').
person('Bob').
knows('Alice','Bob').
As said in lurker's comment (with slight modification and emphasis by me), the reason you get all persons as a result is
You have defined person as: X is a person if X knows someone. And you've defined popular as: X is popular if X is a person and someone knows X.
But you wanted to define: X is popular if X is a person and everyone knows X.
The following is an ASP solution for clingo 4. DLV might have slight differences in syntax.
% Project
person(P) :- knows(P, _).
% Separate helper predicate knows2/2.
% Not needed if polluting knows/2 with knows(X, X) is OK.
knows2(O, P) :- knows(O, P).
knows2(P, P) :- person(P).
% Everybody knows a popular person.
% When there is a person O that doesn't know P, #false is active.
% I.e. all rule instantiations where some O doesn't know P are discarded.
popular(P) :- person(P), #false : person(O), not knows2(O, P).
% Popular person knows only other popular persons.
% Redundant at this point, since the above rule already results
% in correct answer without further integrity constraints.
:- person(P), person(O), popular(P), not popular(O), knows(P, O).
#show popular/1.

PLT Redex: parameterizing a language definition

This is a problem that's been nagging at me for some time, and I wonder if anyone here can help.
I have a PLT Redex model of a language called lambdaLVar that is more or less a garden-variety untyped lambda calculus, but extended with a store containing "lattice variables", or LVars. An LVar is a variable whose value can only increase over time, where the meaning of "increase" is given by a partially ordered set (aka a lattice) that the user of the language specifies. Therefore lambdaLVar is really a family of languages -- instantiate it with one lattice and you get one language; with a different lattice, and you get another. You can take a look at the code here; the important stuff is in lambdaLVar.rkt.
In the on-paper definition of lambdaLVar, the language definition is parameterized by that user-specified lattice. For a long time, I've wanted to do the same kind of parameterization in the Redex model, but so far, I haven't been able to figure out how. Part of the trouble is that the grammar of the language depends on how the user instantiates the lattice: elements of the lattice become terminals in the grammar. I don't know how to express a grammar in Redex that is abstract over the lattice.
In the meantime, I tried to make lambdaLVar.rkt as modular as I could. The language defined in that file is specialized to a particular lattice: natural numbers with max as the least-upper-bound (lub) operation. (Or, equivalently, natural numbers ordered by <=. It's a very boring lattice.) The only parts of the code that are specific to that lattice are the line (define lub-op max) near the top, and natural appearing in the grammar. (There's a lub metafunction that is defined in terms of the user-specified lub-op function. The latter is just a Racket function, so lub has to escape out to Racket to call lub-op.)
Barring the ability to actually specify lambdaLVar in a way that is abstract over the choice of lattice, it seems like I ought to be able to write a version of lambdaLVar with the most bare-bones of lattices -- just Bot and Top elements, where Bot <= Top -- and then use define-extended-language to add more stuff. For instance, I could define a language called lambdaLVar-nats that is specialized to the naturals lattice I described:
;; Grammar for elements of a lattice of natural numbers.
(define-extended-language lambdaLVar-nats
  lambdaLVar
  (StoreVal ....  ;; Extend the original language
            natural))

;; All we have to specify is the lub operation; leq is implicitly <=
(define-metafunction/extension lub lambdaLVar-nats
  lub-nats : d d -> d
  [(lub-nats d_1 d_2) ,(max (term d_1) (term d_2))])
Then, to replace the two reduction relations slow-rr and fast-rr that I had for lambdaLVar, I could define a couple of wrappers:
(define nats-slow-rr
  (extend-reduction-relation slow-rr
                             lambdaLVar-nats))

(define nats-fast-rr
  (extend-reduction-relation fast-rr
                             lambdaLVar-nats))
My understanding from the documentation on extend-reduction-relation is that it should reinterpret the rules in slow-rr and fast-rr, but using lambdaLVar-nats. Putting all this together, I tried running the test suite that I had with one of the new, extended reduction relations:
> (program-test-suite nats-slow-rr)
The first thing I get is a contract violation complaint: small-step-base: input (((l 3)) new) at position 1 does not match its contract. The contract line of small-step-base is just #:contract (small-step-base Config Config), where Config is a grammar nonterminal that has a different meaning when reinterpreted under lambdaLVar-nats than it did under lambdaLVar, because of the specific lattice stuff. As an experiment, I got rid of the contracts on small-step-base and small-step-slow.
I was then able to actually run my 19 test programs, but 10 of them fail. Perhaps unsurprisingly, all the ones that fail are programs that use natural-number-valued LVars in some way. (The rest are "pure" programs that don't interact with the store of LVars at all.) So, the tests that fail are exactly the ones that use the extended grammar.
So I kept following the rabbit hole, and it seems like Redex wants me to extend all of the existing judgment forms and metafunctions to be associated with lambdaLVar-nats rather than lambdaLVar. That makes sense, and it seems to work OK for judgment forms as far as I can tell, but with metafunctions I get into trouble: I want the new metafunction to overload the old one of the same name (because existing judgment forms are using it) and there doesn't seem to be a way to do that. If I have to rename the metafunctions, it defeats the purpose, because I'll have to write whole new judgment forms anyway. I suppose that what I want is a sort of late binding of metafunction calls!
My question in a nutshell: Is there any way in Redex to parameterize the definition of a language in the way I want, or to extend the definition of a language in a way that will do what I want? Will I end up just having to write Redex-generating macros?
Thanks for reading!
I asked the Racket users mailing list; the thread begins here. To summarize the resulting discussion: In Redex as it stands today, the answer is no, there is no way to parameterize a language definition in the way I want. However, it should be possible in a future version of Redex with a module system, which is in the works right now.
It also doesn't work to try to use Redex's existing extension forms (define-extended-language, extend-reduction-relation, and so on) in the way I tried to do here, because -- as I discovered -- the original metafunctions do not get transitively reinterpreted to use the extended languages. But a module system would apparently help with this, too, because it would allow you to package up metafunctions, judgment-forms, and reduction relations together and simultaneously extend them (see the discussion here).
So, for now, the answer is, indeed, to write a Redex-generating macro. Something like this works:
(define-syntax-rule (define-lambdaLVar-language name lub-op lattice-values ...)
  (begin
    ;; Entire original Redex model goes here, with `natural` replaced with
    ;; `lattice-values ...`, and instances of `...` replaced with `(... ...)`
    ))
And then you can instantiate particular lattices with, e.g.:
(define-lambdaLVar-language lambdaLVar-nat max natural)
I hope Redex does get modules soon, but in the meantime, this seems to work well.

could someone explain the connection between type covariance/contravariance and category theory?

I am just starting to read about category theory, and would very much appreciate it if someone could explain the connection between CS contravariance/covariance and category theory. What would some example categories be (i.e. what are their objects/morphisms)? Thanks in advance!
A contravariant functor from $C$ to $D$ is the exact same thing as a normal (i.e. covariant) functor from $C$ to $D^{op}$, where $D^{op}$ is the opposite category of $D$. So it's probably best to understand opposite categories first -- then you'll automatically understand contravariant functors!
Contravariant functors don't come up all that often in CS, although I can think of two exceptions:
You may have heard of contravariance in the context of subtyping. Although this is technically the same term, the connection is really, really weak. In object oriented programming, the classes form a partial order; every partial order is a category with "binary hom-sets" -- given any two objects $A$ and $B$, there is exactly one morphism $A\to B$ iff $A\leq B$ (note the direction; this slightly-confusing orientation is the standard for reasons I won't explain here) and no morphisms otherwise.
Parameterized types like, say, Scala's PartialFunction[-A,Unit] are functors from this simple category to itself... we usually focus on what they do to objects: given a class X, PartialFunction[X,Unit] is also a class. But functors preserve morphisms too; in this case if we had a subclass Dog of Animal, we would have a morphism Dog$\to$Animal, and the functor would preserve this morphism, giving us a morphism PartialFunction[Animal,Unit]$\to$PartialFunction[Dog,Unit], telling us that PartialFunction[Animal,Unit] is a subclass of PartialFunction[Dog,Unit]. If you think about that, it makes sense: suppose you have a situation where you need a function that works on Dogs. A function that works on all Animals would certainly work there!
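Here is a minimal Scala sketch of that reversal (the Animal/Dog classes are hypothetical, just to make the compile-time relationship visible):
object ContravarianceDemo {
  class Animal
  class Dog extends Animal

  // PartialFunction[-A, +B] is contravariant in its argument type, so a handler
  // defined for every Animal can be used wherever a Dog handler is expected:
  val handleAnimal: PartialFunction[Animal, Unit] = { case _: Animal => () }
  val handleDog: PartialFunction[Dog, Unit] = handleAnimal  // the subtyping arrow is reversed

  def main(args: Array[String]): Unit =
    println(handleDog.isDefinedAt(new Dog))  // true
}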
That said, using full-on category theory to talk about partially ordered sets is big-time overkill.
Less common, but actually uses the category theory: consider the category Types(Hask) whose objects are the types of the Haskell programming language and where a morphism $\tau_1\to\tau_2$ is a function of type $\tau_1$->$\tau_2$. There is also a category Judgments(Hask) whose objects are lists of typing judgments $\tau_1\vdash\tau_2$ and whose morphisms are proofs of all the judgments on one list using the judgments on the other list as hypotheses. There is a functor from Types(Hask) to Judgments(Hask) which takes a Types(Hask)-morphism $f:A\to B$ to the proof
B |- Int
----------
......
----------
A |- Int
which is a morphism $(B\vdash Int)\to(A\vdash Int)$ -- notice the change of direction. Basically what this is saying is that if you've got a function that turns A's into B's, and an expression of type Int with a free variable x of type B, then you can wrap it with "let x = f y in ..." and arrive at an expression still of type Int but whose only free variable is of type $A$, not $B$.
There are very nice videos about contravariance/covariance in the Going Deep series on Microsoft's Channel 9. You can start here:
E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 1 of 3
E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 2 of 3
E2E: Brian Beckman and Erik Meijer - Co/Contravariance in Physics and Programming, 3 of n
E2E: Whiteboard Jam Session with Brian Beckman and Greg Meredith - Monads and Coordinate Systems

Is the Scala 2.8 collections library a case of "the longest suicide note in history"? [closed]

I have just started to look at the Scala collections library re-implementation which is coming in the imminent 2.8 release. Those familiar with the library from 2.7 will notice that the library, from a usage perspective, has changed little. For example...
> List("Paris", "London").map(_.length)
res0: List[Int] = List(5, 6)
...would work in either version. The library is eminently usable: in fact it's fantastic. However, those previously unfamiliar with Scala and poking around to get a feel for the language now have to make sense of method signatures like:
def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That
For such simple functionality, this is a daunting signature and one which I find myself struggling to understand. Not that I think Scala was ever likely to be the next Java (or C/C++/C#) - I don't believe its creators were aiming it at that market - but I think it is/was certainly feasible for Scala to become the next Ruby or Python (i.e. to gain a significant commercial user-base).
Is this going to put people off coming to Scala?
Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off?
Was the library re-design a sensible idea?
If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens?
Steve Yegge once attacked Scala (mistakenly in my opinion) for what he saw as its overcomplicated type-system. I worry that someone is going to have a field day spreading FUD with this API (similarly to how Josh Bloch scared the JCP out of adding closures to Java).
Note - I should be clear that, whilst I believe that Joshua Bloch was influential in the rejection of the BGGA closures proposal, I don't ascribe this to anything other than his honestly-held beliefs that the proposal represented a mistake.
Despite whatever my wife and coworkers keep telling me, I don't think I'm an idiot: I have a good degree in mathematics from the University of Oxford, and I've been programming commercially for almost 12 years and in Scala for about a year (also commercially).
Note that the inflammatory subject title is a quotation made about the manifesto of a UK political party in the early 1980s. This question is subjective, but it is a genuine question; I've made it CW, and I'd like some opinions on the matter.
I hope it's not a "suicide note", but I can see your point. You hit on what is at the same time both a strength and a problem of Scala: its extensibility. This lets us implement most major functionality in libraries. In some other languages, sequences with something like map or collect would be built in, and nobody has to see all the hoops the compiler has to go through to make them work smoothly. In Scala, it's all in a library, and therefore out in the open.
In fact the functionality of map that's supported by its complicated type is pretty advanced. Consider this:
scala> import collection.immutable.BitSet
import collection.immutable.BitSet
scala> val bits = BitSet(1, 2, 3)
bits: scala.collection.immutable.BitSet = BitSet(1, 2, 3)
scala> val shifted = bits map { _ + 1 }
shifted: scala.collection.immutable.BitSet = BitSet(2, 3, 4)
scala> val displayed = bits map { _.toString + "!" }
displayed: scala.collection.immutable.Set[java.lang.String] = Set(1!, 2!, 3!)
See how you always get the best possible type? If you map Ints to Ints you get again a BitSet, but if you map Ints to Strings, you get a general Set. Both the static type and the runtime representation of map's result depend on the result type of the function that's passed to it. And this works even if the set is empty, so the function is never applied! As far as I know there is no other collection framework with an equivalent functionality. Yet from a user perspective this is how things are supposed to work.
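For instance, here is a sketch of the empty case (not a verbatim transcript, but the static types come out the same way):
scala> val emptySet = BitSet()
emptySet: scala.collection.immutable.BitSet = BitSet()
scala> val stillTyped = emptySet map { _.toString + "!" }
stillTyped: scala.collection.immutable.Set[java.lang.String] = Set()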
The problem we have is that all the clever technology that makes this happen leaks into the type signatures, which become large and scary. But maybe a user should not be shown the full type signature of map by default? How about if, when she looked up map in BitSet, she got:
map(f: Int => Int): BitSet (click here for more general type)
The docs would not lie in that case, because from a user perspective indeed map has the type (Int => Int) => BitSet. But map also has a more general type which can be inspected by clicking on another link.
We have not yet implemented functionality like this in our tools. But I believe we need to do this, to avoid scaring people off and to give more useful info. With tools like that, hopefully smart frameworks and libraries will not become suicide notes.
I do not have a PhD, nor any other kind of degree neither in CS nor math nor indeed any other field. I have no prior experience with Scala nor any other similar language. I have no experience with even remotely comparable type systems. In fact, the only language that I have more than just a superficial knowledge of which even has a type system is Pascal, not exactly known for its sophisticated type system. (Although it does have range types, which AFAIK pretty much no other language has, but that isn't really relevant here.) The other three languages I know are BASIC, Smalltalk and Ruby, none of which even have a type system.
And yet, I have no trouble at all understanding the signature of the map function you posted. It looks to me like pretty much the same signature that map has in every other language I have ever seen. The difference is that this version is more generic. It looks more like a C++ STL thing than, say, Haskell. In particular, it abstracts away from the concrete collection type by only requiring that the argument is IterableLike, and also abstracts away from the concrete return type by only requiring that an implicit conversion function exists which can build something out of that collection of result values. Yes, that is quite complex, but it really is only an expression of the general paradigm of generic programming: do not assume anything that you don't actually have to.
In this case, map does not actually need the collection to be a list, or being ordered or being sortable or anything like that. The only thing that map cares about is that it can get access to all elements of the collection, one after the other, but in no particular order. And it does not need to know what the resulting collection is, it only needs to know how to build it. So, that is what its type signature requires.
So, instead of
map :: (a → b) → [a] → [b]
which is the traditional type signature for map, it is generalized to not require a concrete List but rather just an IterableLike data structure
map :: (IterableLike i, IterableLike j) ⇒ (a → b) → i → j
which is then further generalized by only requiring that a function exists that can convert the result to whatever data structure the user wants:
map :: IterableLike i ⇒ (a → b) → i → ([b] → c) → c
I admit that the syntax is a bit clunkier, but the semantics are the same. Basically, it starts from
def map[B](f: (A) ⇒ B): List[B]
which is the traditional signature for map. (Note how due to the object-oriented nature of Scala, the input list parameter vanishes, because it is now the implicit receiver parameter that every method in a single-dispatch OO system has.) Then it generalized from a concrete List to a more general IterableLike
def map[B](f: (A) ⇒ B): IterableLike[B]
Now, it replaces the IterableLike result collection with a function that produces, well, really just about anything.
def map[B, That](f: A ⇒ B)(implicit bf: CanBuildFrom[Repr, B, That]): That
Which I really believe is not that hard to understand. There's really only a couple of intellectual tools you need:
You need to know (roughly) what map is. If you gave only the type signature without the name of the method, I admit, it would be a lot harder to figure out what is going on. But since you already know what map is supposed to do, and you know what its type signature is supposed to be, you can quickly scan the signature and focus on the anomalies, like "why does this map take two functions as arguments, not one?"
You need to be able to actually read the type signature. But even if you have never seen Scala before, this should be quite easy, since it really is just a mixture of type syntaxes you already know from other languages: VB.NET uses square brackets for parametric polymorphism, and using an arrow to denote the return type and a colon to separate name and type is actually the norm.
You need to know roughly what generic programming is about. (Which isn't that hard to figure out, since it's basically all spelled out in the name: it's literally just programming in a generic fashion).
None of these three should give any professional or even hobbyist programmer a serious headache. map has been a standard function in pretty much every language designed in the last 50 years, the fact that different languages have different syntax should be obvious to anyone who has designed a website with HTML and CSS, and you can't subscribe to an even remotely programming-related mailing list without some annoying C++ fanboy from the church of St. Stepanov explaining the virtues of generic programming.
Yes, Scala is complex. Yes, Scala has one of the most sophisticated type systems known to man, rivaling and even surpassing languages like Haskell, Miranda, Clean or Cyclone. But if complexity were an argument against success of a programming language, C++ would have died long ago and we would all be writing Scheme. There are lots of reasons why Scala will very likely not be successful, but the fact that programmers can't be bothered to turn on their brains before sitting down in front of the keyboard is probably not going to be the main one.
Same thing in C++:
template <template <class, class> class C,
          class T,
          class A,
          class T_return,
          class T_arg>
C<T_return, typename A::template rebind<T_return>::other>
map(C<T, A> &c, T_return (*func)(T_arg))
{
    C<T_return, typename A::template rebind<T_return>::other> res;
    for (typename C<T, A>::iterator it = c.begin(); it != c.end(); ++it) {
        res.push_back(func(*it));
    }
    return res;
}
Well, I can understand your pain, but, quite frankly, people like you and I -- or pretty much any regular Stack Overflow user -- are not the rule.
What I mean by that is that... most programmers won't care about that type signature, because they'll never see it! They don't read documentation.
As long as they saw some example of how the code works, and the code doesn't fail them in producing the result they expect, they won't ever look at the documentation. When that fails, they'll look at the documentation and expect to see usage examples at the top.
With these things in mind, I think that:
Anyone (as in, most people) who ever comes across that type signature will mock Scala to no end if they are pre-disposed against it, and will consider it a symbol of Scala's power if they like Scala.
If the documentation isn't enhanced to provide usage examples and explain clearly what a method is for and how to use it, it can detract from Scala adoption a bit.
In the long run, it won't matter. That Scala can do stuff like that will make libraries written for Scala much more powerful and safer to use. These libraries and frameworks will attract programmers attracted to powerful tools.
Programmers who like simplicity and directness will continue to use PHP, or similar languages.
Alas, Java programmers are much into power tools, so, in answering that, I have just revised my expectation of mainstream Scala adoption. I have no doubt at all that Scala will become a mainstream language. Not C-mainstream, but perhaps Perl-mainstream or PHP-mainstream.
Speaking of Java, did you ever replace the class loader? Have you ever looked into what that involves? Java can be scary, if you look at the places framework writers do. It's just that most people don't. The same thing applies to Scala, IMHO, but early adopters have a tendency to look under each rock they encounter, to see if there's something hiding there.
Is this going to put people off coming to Scala?
Yes, but it will also prevent people from being put off. I've considered the lack of collections that use higher-kinded types to be a major weakness ever since Scala gained support for higher-kinded types. It makes the API docs more complicated, but it really makes usage more natural.
Is this going to give scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off?
Some probably will. I don't think Scala is accessible to many "professional" developers, partially due to the complexity of Scala and partly due to the unwillingness of many developers to learn. The CTOs who employ such developers will rightly be scared off.
Was the library re-design a sensible idea?
Absolutely. It makes collections fit much better with the rest of the language and the type system, even if it still has some rough edges.
If you're using scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens?
I'm not using it commercially. I'll probably wait until at least a couple revs into the 2.8.x series before even trying to introduce it, so that the bugs can be flushed out. I'll also wait to see how much success EPFL has in improving its development and release processes. What I'm seeing looks hopeful, but I work for a conservative company.
On the more general topic of "is Scala too complicated for mainstream developers?"...
Most developers, mainstream or otherwise, are maintaining or extending existing systems. This means that most of what they use is dictated by decisions made long ago. There are still plenty of people writing COBOL.
Tomorrow's mainstream developer will work maintaining and extending the applications that are being built today. Many of these applications are not being built by mainstream developers. Tomorrow's mainstream developers will use the language that is being used by today's most successful developers of new applications.
One way that the Scala community can help ease the fear of programmers new to Scala is to focus on practice and to teach by example--a lot of examples that start small and grow gradually larger. Here are a few sites that take this approach:
Daily Scala
Learning Scala in small bites
Simply Scala
After spending some time on these sites, one quickly realizes that Scala and its libraries, though perhaps difficult to design and implement, are not so difficult to use, especially in the common cases.
I have an undergraduate degree from a cheap "mass market" US university, so I'd say I fall into the middle of the user intelligence (or at least education) scale :) I've been dabbling with Scala for just a few months and have worked on two or three non-trivial apps.
Especially now that IntelliJ has released their fine IDE with what IMHO is currently the best Scala plugin, Scala development is relatively painless:
I find I can use Scala as a "Java without semicolons," i.e. I write similar-looking code to what I'd do in Java, and benefit a little from syntactic brevity such as that gained by type inference. Exception handling, when I do it at all, is more convenient. Class definition is much less verbose without the getter/setter boilerplate.
Once in a while I manage to write a single line to accomplish the equivalent of multiple lines of Java. Where applicable, chains of functional methods like map, fold, collect, filter etc. are fun to compose and elegant to behold.
Only rarely do I find myself benefitting from Scala's more high-powered features: Closures and partial (or curried) functions, pattern matching... that kinda thing.
As a newbie, I continue to struggle with the terse and idiomatic syntax. Method calls without parameters don't need parentheses except where they do; cases in the match statement need a fat arrow ( => ), but there are also places where you need a thin arrow ( -> ). Many methods have short but rather cryptic names like /: or \: - I can get my stuff done if I flip enough manual pages, but some of my code ends up looking like Perl or line noise. Ironically, one of the most popular bits of syntactic shorthand is missing in action: I keep getting bitten by the fact that Int doesn't define a ++ method.
This is just my opinion: I feel like Scala has the power of C++ combined with the complexity and readability of C++. The syntactic complexity of the language also makes the API documentation hard to read.
Scala is very well thought out and brilliant in many respects. I suspect many an academic would love to program in it. However, it's also full of cleverness and gotchas, it has a much higher learning curve than Java and is harder to read. If I scan the fora and see how many developers are still struggling with the finer points of Java, I cannot conceive of Scala ever becoming a mainstream language. No company will be able to justify sending its developers on a 3 week Scala course when formerly they only needed a 1 week Java course.
I think the primary problem with that method is that the (implicit bf : CanBuildFrom[Repr, B, That]) goes without any explanation. Even though I know what implicit arguments are, there's nothing indicating how this affects the call. Chasing through the scaladoc only leaves me more confused (few of the classes related to CanBuildFrom even have documentation).
I think a simple "there must be an implicit object in scope for bf that provides a builder for objects of type B into the return type That" would help somewhat, but it's kind of a heady concept when all you really want to do is map A's to B's. In fact, I'm not sure that's right, because I don't know what the type Repr means, and the documentation for Traversable certainly gives no clue at all.
So, I'm left with two options, neither of them pleasant:
Assume it will just work how the old map works and how map works in most other languages
Dig into the source code some more
I get that Scala is essentially exposing the guts of how these things work and that ultimately this is to provide a way to do what oxbow_lakes is describing. But it's a distraction in the signature.
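That said, one way to see the hidden parameter at work is to pass it explicitly; a small sketch, assuming Scala 2.8's scala.collection.breakOut helper:
import scala.collection.breakOut

object BreakOutDemo {
  // Normally the implicit bf is resolved from the source collection,
  // so mapping a List builds another List:
  val sameShape: List[Int] = List(1, 2, 3).map(_ + 1)

  // Passing a CanBuildFrom explicitly (here via breakOut, which picks the
  // builder from the expected result type) makes the hidden parameter visible
  // and builds a Set in the same single pass:
  val otherShape: Set[Int] = List(1, 2, 3).map(_ + 1)(breakOut)

  def main(args: Array[String]): Unit = println((sameShape, otherShape))
}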
I'm a Scala beginner and I honestly don't see a problem with that type signature. The parameter is the function to map, and the implicit parameter is the builder that produces the correct result collection. Clear and readable.
The whole thing's quite elegant, actually. The builder type parameters let the compiler choose the correct return type while the implicit parameter mechanism hides this extra parameter from the class user. I tried this:
Map(1 -> "a", 2 -> "b").map((t) => (t._2) -> (t._1)) // returns Map("a" -> 1, "b" -> 2)
Map(1 -> "a", 2 -> "b").map((t) => t._2) // returns List("a", "b")
That's polymorphism done right.
Now, granted, it's not a mainstream paradigm and it will scare away many. But, it will also attract many who value its expressiveness and elegance.
Unfortunately the signature for map that you gave is an incorrect one for map and there is indeed legitimate criticism.
The first criticism is that by subverting the signature for map, we have something that is more general. It is a common error to believe that this is a virtue by default. It isn't. The map function is very well defined as a covariant functor Fx -> (x -> y) -> Fy with adherence to the two laws of composition and identity. Anything else attributed to "map" is a travesty.
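In Scala terms, the definition I mean looks roughly like this (a hypothetical Functor trait written out for illustration; it is not the signature Scala's collections actually use):
object FunctorSketch {
  trait Functor[F[_]] {
    def map[X, Y](fx: F[X])(f: X => Y): F[Y]
    // Laws every instance must satisfy:
    //   identity:     map(fx)(x => x)    == fx
    //   composition:  map(map(fx)(f))(g) == map(fx)(f andThen g)
  }

  // Note that an instance fixes the shape: mapping a List always yields a List.
  object ListFunctor extends Functor[List] {
    def map[X, Y](fx: List[X])(f: X => Y): List[Y] = fx map f
  }
}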
The given signature is something else, but it is not map. What I suspect it is trying to be is a specialised and slightly altered version of the "traverse" signature from the paper, The Essence of the Iterator Pattern. Here is its signature:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
I shall convert it to Scala:
def traverse[A, B](f: A => F[B], a: T[A])(implicit t: Traversable[T], ap: Applicative[F]): F[T[B]]
Of course it fails -- it is not general enough! Also, it is slightly different (note that you can get map by running traverse through the Identity functor). However, I suspect that if the library writers were more aware of library generalisations that are well documented (Applicative Programming with Effects precedes the aforementioned), then we wouldn't see this error.
Second, the map function is a special-case in Scala because of its use in for-comprehensions. This unfortunately means that a library designer who is better equipped cannot ignore this error without also sacrificing the syntactic sugar of comprehensions. In other words, if the Scala library designers were to destroy a method, then this is easily ignored, but please not map!
I hope someone speaks up about it, because as it is, it will become harder to work around the errors that Scala insists on making, apparently for reasons that I have strong objections to. That is, the solution to "the irresponsible objections from the average programmer (i.e. too hard!)" is not "appease them to make it easier for them" but instead to provide pointers and assistance to become better programmers. My objectives and Scala's are in contention on this issue, but back to your point.
You were probably making your point, predicting specific responses from "the average programmer." That is, the people who will claim "but it is too complicated!" or some such. These are the Yegges or Blochs that you refer to. My response to these people of the anti-intellectualism/pragmatism movement is quite harsh and I'm already anticipating a barrage of responses, so I will omit it.
I truly hope the Scala libraries improve, or at least, the errors can be safely tucked away in a corner. Java is a language where "trying to do anything useful" is so incredibly costly, that it is often not worth it because the overwhelming amount of errors simply cannot be avoided. I implore Scala to not go down the same path.
I totally agree with both the question and Martin's answer :). Even in Java, reading javadoc with generics is much harder than it should be due to the extra noise. This is compounded in Scala, where implicit parameters are used as in the question's example code (while the implicits do very useful collection-morphing stuff).
I don't think it's a problem with the language per se - I think it's more a tooling issue. And while I agree with what Jörg W Mittag says, I think looking at scaladoc (or the documentation of a type in your IDE) should require as little brain power as possible to grok what a method is, what it takes, and what it returns. There shouldn't be a need to hack up a bit of algebra on a bit of paper to get it :)
For sure IDEs need a nice way to show all the methods for any variable/expression/type (which, as with Martin's example, can have all the generics inlined so it's nice and easy to grok). I like Martin's idea of hiding the implicits by default too.
To take the example in scaladoc...
def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That
When looking at this in scaladoc I'd like the generic block [B, That] to be hidden by default, as well as the implicit parameter (maybe they show if you hover over a little icon with the mouse) - as it's extra stuff to grok when reading it which usually isn't that relevant. e.g. imagine if this looked like...
def map(f: A => B): That
Nice and clear and obvious what it does. You might wonder what 'That' is; if you moused over or clicked it, it could expand the [B, That] text, highlighting 'That', for example.
Maybe a little icon could be used for the [] declaration and the (implicit...) block so it's clear there are little bits of the statement collapsed? It's hard to use a token for it, but I'll use a . for now...
def map.(f: A => B).: That
So by default the 'noise' of the type system is hidden from the main 80% of what folks need to look at - the method name, its parameter types and its return type - in a nice, simple, concise way, with little expandable links to the detail if you really care that much.
Mostly folks are reading scaladoc to find out what methods they can call on a type and what parameters they can pass. We're kinda overloading users with way too much detail right now, IMHO.
Here's another example...
def orElse[A1 <: A, B1 >: B](that: PartialFunction[A1, B1]): PartialFunction[A1, B1]
Now if we hid the generics declaration it's easier to read
def orElse(that: PartialFunction[A1, B1]): PartialFunction[A1, B1]
Then if folks hover over, say, A1 we could show the declaration of A1 as A1 <: A. Covariant and contravariant types in generics add lots of noise too, which could be rendered in a much easier-to-grok way for users, I think.
I don't know how to break it to you, but I have a PhD from Cambridge, and I'm using 2.8 just fine.
More seriously, I hardly spent any time with 2.7 (it won't inter-op with a Java library I am using) and started using Scala just over a month ago. I have some experience with Haskell (not much), but just ignored the stuff you're worried about and looked for methods that matched my experience with Java (which I use for a living).
So: I am a "new user" and I wasn't put off - the fact that it works like Java gave me enough confidence to ignore the bits I didn't understand.
(However, the reason I was looking at Scala was partly to see whether to push it at work, and I am not going to do so yet. Making the documentation less intimidating would certainly help, but what surprised me is how much it is still changing and being developed (to be fair what surprised me most was how awesome it is, but the changes came a close second). So I guess what I am saying is that I'd rather prefer the limited resources were put into getting it into a final state - I don't think they were expecting to be this popular this soon.)
Don't know Scala at all, however a few weeks ago I could not read Clojure. Now I can read most of it, but can not write anything yet beyond the most simplistic examples. I suspect Scala is no different. You need a good book or course depending on how you learn. Just reading the map declaration above, I got maybe 1/3 of it.
I believe the bigger problems are not the syntax of these languages, but adopting and internalizing the paradigms that make them usable in everyday production code. For me Java was not a huge leap from C++, which was not a huge leap from C, which was not a leap at all from Pascal, nor Basic etc... But coding in a functional language like Clojure is a huge leap (for me anyway). I guess in Scala you can code in Java style or Scala style. But in Clojure you will create quite the mess trying to keep your imperative habits from Java.
Scala has a lot of crazy features (particularly where implicit parameters are concerned) that look very complicated and academic, but are designed to make things easy to use. The most useful ones get syntactic sugar (like [A <% B] which means that an object of type A has an implicit conversion to an object of type B) and a well-documented explanation of what they do. But most of the time, as a client of these libraries you can ignore the implicit parameters and trust them to do the right thing.
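For example, here is roughly what that sugar buys you (maxOf is a made-up name; the second version is what the compiler effectively rewrites the first one to):
object ViewBoundDemo {
  // [A <% Ordered[A]] is sugar for an extra implicit conversion parameter.
  def maxOf[A <% Ordered[A]](x: A, y: A): A = if (x < y) y else x

  // The desugared equivalent: the view is an ordinary implicit parameter.
  def maxOf2[A](x: A, y: A)(implicit view: A => Ordered[A]): A =
    if (x < y) y else x

  def main(args: Array[String]): Unit =
    println(maxOf(3, 5))  // 5 - the Int => Ordered[Int] view comes from Predef
}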
Is this going to put people off coming to Scala?
I don't think it is the main factor that will affect how popular Scala will become, because Scala has a lot of power and its syntax is not as foreign to a Java/C++/PHP programmer as Haskell, OCaml, SML, Lisps, etc..
But I do think Scala's popularity will plateau at less than where Java is today, because I also think the next mainstream language must be much simplified, and the only way I see to get there is pure immutability, i.e. declarative like HTML, but Turing complete. However, I am biased because I am developing such a language, but I only did so after a several-month study ruled out Scala as sufficient for what I needed.
Is this going to give Scala a bad name in the commercial world as an academic plaything that only dedicated PhD students can understand? Are CTOs and heads of software going to get scared off?
I don't think Scala's reputation will suffer from the Haskell complex. But I think that some will put off learning it, because for most programmers, I don't yet see a use case that forces them to use Scala, and they will procrastinate learning about it. Perhaps the highly-scalable server side is the most compelling use case.
And, for the mainstream market, first learning Scala is not a "breath of fresh air", where one is writing programs immediately, such as first using HTML or Python. Scala tends to grow on you, after one learns all the details that one stumbles on from the start. However, maybe if I had read Programming in Scala from the start, my experience and opinion of the learning curve would have been different.
Was the library re-design a sensible idea?
Definitely.
If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens?
I am using Scala as the initial platform of my new language. I probably wouldn't be building code on Scala's collection library if I was using Scala commercially otherwise. I would create my own category theory based library, since the one time I looked, I found Scalaz's type signatures even more verbose and unwieldy than Scala's collection library. Part of that problem perhaps is Scala's way of implementing type classes, and that is a minor reason I am creating my own language.
I decided to write this answer, because I wanted to force myself to research and compare Scala's collection class design to the one I am doing for my language. Might as well share my thought process.
The 2.8 Scala collections use of a builder abstraction is a sound design principle. I want to explore two design tradeoffs below.
WRITE-ONLY CODE: After writing this section, I read Carl Smotricz's comment, which agrees with what I expect to be the tradeoff. James Strachan's and davetron5000's comments concur that the meaning of That (it is not even That[B]) and the mechanism of the implicit are not easy to grasp intuitively. See my use of a monoid in issue #2 below, which I think is much more explicit. Derek Mahar's comment is about writing Scala, but what about reading the Scala of others that is not "in the common cases"?
One criticism I have read about Scala is that it is easier to write it than to read the code that others have written. And I find this to be occasionally true for various reasons (e.g. many ways to write a function, automatic closures, Unit for DSLs, etc.), but I am undecided if this is a major factor. Here the use of implicit function parameters has pluses and minuses. On the plus side, it reduces verbosity and automates selection of the builder object. In Odersky's example the conversion from a BitSet, i.e. Set[Int], to a Set[String] is implicit. The unfamiliar reader of the code might not readily know what the type of the resulting collection is, unless they can reason well about all the potential invisible implicit builder candidates which might exist in the current package scope. Of course, the experienced programmer and the writer of the code will know that BitSet is limited to Int, thus a map to String has to convert to a different collection type. But which collection type? It isn't specified explicitly.
AD-HOC COLLECTION DESIGN: After writing this section, I read Tony Morris's comment and realized I am making nearly the same point. Perhaps my more verbose exposition will make the point more clear.
In "Fighting Bit Rot with Types" Odersky & Moors, two use cases are presented. They are the restriction of BitSet to Int elements, and Map to pair tuple elements, and are provided as the reason that the general element mapping function, A => B, must be able to build alternative destination collection types. However, afaik this is flawed from a category theory perspective. To be consistent in category theory and thus avoid corner cases, these collection types are functors, in which each morphism, A => B, must map between objects in the same functor category, List[A] => List[B], BitSet[A] => BitSet[B]. For example, an Option is a functor that can be viewed as a collection of sets of one Some( object ) and the None. There is no general map from Option's None, or List's Nil, to other functors which don't have an "empty" state.
There is a tradeoff design choice made here. In the design of the collections library for my new language, I chose to make everything a functor, which means if I implement a BitSet, it needs to support all element types, by using a non-bit-field internal representation when presented with a non-integer type parameter, and that functionality is already in the Set which it inherits from in Scala. And Map in my design needs to map only its values, and it can provide a separate non-functor method for mapping its (key,value) pair tuples. One advantage is that each functor is then usually also an applicative and perhaps a monad too. Thus all functions between element types, e.g. A => B => C => D => ..., are automatically lifted to the functions between lifted applicative types, e.g. List[A] => List[B] => List[C] => List[D] => .... For mapping from a functor to another collection class, I offer a map overload which takes a monoid, e.g. Nil, None, 0, "", Array(), etc. So the builder abstraction function is the append method of a monoid and is supplied explicitly as a necessary input parameter, thus with no invisible implicit conversions. (Tangent: this input parameter also enables appending to non-empty monoids, which Scala's map design can't do.) Such conversions are a map and a fold in the same iteration pass. Also I provide a traversable, in the category sense of "Applicative Programming with Effects" (McBride & Patterson), which also enables map + fold in a single iteration pass from any traversable to any applicative, where most every collection class is both. Also the state monad is an applicative and thus is a fully generalized builder abstraction from any traversable.
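Here is a rough sketch of what I mean (Monoid and mapTo are names from my design sketch, not Scala's actual collections API):
object MonoidMapSketch {
  // Hypothetical: an explicit monoid plays the role of the builder.
  trait Monoid[M] {
    def empty: M
    def append(a: M, b: M): M
  }

  object StringConcat extends Monoid[String] {
    def empty = ""
    def append(a: String, b: String) = a + b
  }

  // Map and fold in a single pass: each mapped element is injected into the
  // monoid and appended; the "builder" is the explicit monoid argument,
  // with no invisible implicit conversions.
  def mapTo[A, B, M](xs: List[A])(f: A => B)(m: Monoid[M], inject: B => M): M =
    xs.foldLeft(m.empty)((acc, a) => m.append(acc, inject(f(a))))

  def main(args: Array[String]): Unit =
    println(mapTo(List(1, 2, 3))(_ + 1)(StringConcat, _.toString))  // "234"
}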
So afaics the Scala collections are "ad-hoc" in the sense that they are not grounded in category theory, and category theory is the essence of higher-level denotational semantics. Although Scala's implicit builders are at first appearance "more generalized" than a functor model + monoid builder + traversable -> applicative, they are afaik not proven to be consistent with any category, and thus we don't know what rules they follow in the most general sense and what the corner cases will be, given they may not obey any category model. It is simply not true that adding more variables makes something more general; one of the huge benefits of category theory is that it provides rules by which to maintain generality while lifting to higher-level semantics. A collection is a category.
I read somewhere, I think it was from Odersky, that another justification for the library design is that programming in a pure functional style has the cost of limited recursion and speed where tail recursion isn't used. I haven't found it difficult to employ tail recursion in every case that I have encountered so far.
Additionally I am carrying in my mind an incomplete idea that some of Scala's tradeoffs are due to trying to be both a mutable and an immutable language, unlike, for example, Haskell or the language I am developing. This concurs with Tony Morris's comment about for-comprehensions. In my language, there are no loops and no mutable constructs. My language will sit on top of Scala (for now) and owes much to it, and this wouldn't be possible if Scala didn't have the general type system and mutability. That might not be true though, because I think Odersky & Moors ("Fighting Bit Rot with Types") are incorrect to state that Scala is the only OOP language with higher kinds, because I verified (myself and via Bob Harper) that Standard ML has them. It also appears SML's type system may be equivalently flexible (since the 1980s), which may not be readily appreciated because its syntax is not as similar to Java (and C++/PHP) as Scala's is. In any case, this isn't a criticism of Scala, but rather an attempt to present an incomplete analysis of tradeoffs, which I hope is germane to the question. Scala and SML don't suffer from Haskell's inability to do diamond multiple inheritance, which is critical and I understand is why so many functions in the Haskell Prelude are repeated for different types.
It seems necessary to state one's degree here: B.A. in Political Science and B.Ed. in Computer Science.
To the point:
Is this going to put people off coming to Scala?
Scala is difficult, because its underlying programming paradigm is difficult. Functional programming scares a lot of people. It is possible to build closures in PHP but people rarely do. So no, not this signature but all the rest will put people off, if they do not have the specific education to make them value the power of the underlying paradigm.
If this education is available, everyone can do it. Last year I built a chess computer with a bunch of school kids in Scala! They had their problems, but they did fine in the end.
If you're using Scala commercially, are you worried about this? Are you planning to adopt 2.8 immediately or wait to see what happens?
I would not be worried.
I have a maths degree from Oxford too! It took me a while to 'get' the new collections stuff. But I like it a lot now that I do. In fact, the typing of 'map' was one of the first big things that bugged me in 2.7 (perhaps since the first thing I did was subclass one of the collection classes).
Reading Martin's paper on the new 2.8 collections really helped explain the use of implicits, but yes the documentation itself definitely needs to do a better job of explaining the role of different kind of implicits within method signatures of core APIs.
My main concern is more this: when is 2.8 going to be released? When will the bug reports stop coming in for it? Has the Scala team bitten off more than they can chew with 2.8 / tried to change too much at once?
I'd really like to see 2.8 stabilised for release as a priority before adding anything else new at all, and wonder (while watching from the sidelines) if some improvements could be made to the way the development roadmap for the Scala compiler is managed.
What about error messages at the use site?
And what about when the use case comes along where one needs to integrate existing types with a custom one that fits a DSL? One has to be well educated on matters of association, precedence, implicit conversions, implicit parameters, higher kinds, and maybe existential types.
It's very good to know that mostly it's simple, but that's not necessarily enough. At least there must be one person who knows this stuff if a widespread library is to be designed.