CoffeeScript compatible fork with fixed scoping?

Is there a CoffeeScript fork with no changes other than fixed scoping, so that it's largely compatible with CoffeeScript (completely compatible if the code has no outer-variable assignments)? I'd consider acceptable ways to assign an outer variable to include := a la LiveScript, or nonlocal a la Python, or something else.

There isn't. LiveScript is, I think, largely compatible with CoffeeScript, but it is nothing near "a small patch".

Coco could be your friend:
https://github.com/satyr/coco
I am going to try it myself now. From its description:
On its way to hide JavaScript's bad parts, CoffeeScript has
accumulated own quirks: horrible variable scope, awkward ranges,
confusing and/or pointless keywords, verbose file extension, and so
on. Coco tries to amend them, entwining good parts of both.
Found in this list of CoffeeScript alternatives:
https://github.com/jashkenas/coffee-script/wiki/List-of-languages-that-compile-to-JS
EDIT:
There is still a caveat with the Coco solution, afaik.
The := syntax only helps you with assignments, not with read access.
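For illustration, a rough sketch of Coco's := (based on its documented scoping rules; the variable names are made up):
count = 0
bump = ->
  count := count + 1  # := assigns to the outer count; a plain = would declare a new local
bump()
console.log count     # 1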

Related

gcc precompiler directive __attribute__ ((__cleanup__)) vs ((cleanup)) (with vs without underscores?)

I'm learning about gcc's cleanup attribute, and how it calls a function to be run when a variable goes out of scope, but I don't understand why you can use the word "cleanup" with or without underscores. Where is the version with underscores documented?
The gcc documentation shows it like this:
__attribute__ ((cleanup(cleanup_function)))
However, most code samples I read, show it like this:
__attribute__ ((__cleanup__(cleanup_function)))
Ex:
http://echorand.me/site/notes/articles/c_cleanup/cleanup_attribute_c.html
http://www.nongnu.org/avr-libc/user-manual/atomic_8h_source.html
Note that the first example link states they are identical, and of course coding it proves this, but how did he know this originally? Where did this come from?
Why the difference? Where is __cleanup__ defined or documented, as opposed to cleanup?
My fundamental problem lies in the fact that I don't know what I don't know, therefore I am trying to expose some of my unknown unknowns so they become known unknowns, until I can study them and make them known knowns.
My thinking is that perhaps there is some globally applied principle to gcc preprocessor directives where you can arbitrarily add underscores before or after any of them? Or perhaps only some of them? Or perhaps the extra underscores modify the directive or attribute somehow, and there are cases where one form, with or without them, is preferred over the other?
You are allowed to define a macro cleanup, as it is not a name that is reserved to the compiler. You are not allowed to define one named __cleanup__. This guarantees that your code using __cleanup__ is unaffected by other code (provided that other code behaves, of course).
As https://gcc.gnu.org/onlinedocs/gcc/Attribute-Syntax.html#Attribute-Syntax explains:
You may optionally specify attribute names with __ preceding and following the name. This allows you to use them in header files without being concerned about a possible macro of the same name. For example, you may use the attribute name __noreturn__ instead of noreturn.
(But note that attributes are not preprocessor directives.)
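For a concrete sketch of both points (the free_ptr helper and the colliding macro name are made up for illustration):
#include <stdio.h>
#include <stdlib.h>

/* Called with a pointer to the annotated variable when it goes out of scope. */
static void free_ptr(void **p) { free(*p); }

/* Imagine some header defines a plain-named macro... */
#define cleanup something_unrelated

int main(void) {
    /* ...the plain spelling __attribute__((cleanup(...))) would now be macro-expanded
       and rejected, while the underscored spelling is immune, because identifiers
       like __cleanup__ are reserved and may not be defined as macros. */
    void *buf __attribute__((__cleanup__(free_ptr))) = malloc(16);
    printf("buf %p is freed automatically at the end of main\n", buf);
    return 0;
}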

Elisp strict mode

I am new to Emacs Lisp, and my feeling is that it lacks strictness (and namespaces, and more...).
To be more comfortable with it I need a way to make the interpreter/byte compiler complain loudly if I use a deprecated or obsolete function or variable (even better, hide them). It is obvious why this is not so simple and why removing the corresponding .el packages will not work: they may be needed by some legacy code.
Also, if it is possible, turning off all aliases would be nice. In my opinion they exist only for backwards compatibility, which I do not need. Since setting this globally can break something, I hope there is something like use strict in JavaScript that can be applied to an inner body, so the effect is neatly localized.
Do not get me wrong: I think the global namespace of Elisp is like a dump, and if it could be any cleaner, why not?
To put it in one sentence: how do I make the Elisp global namespace free of obsolete and deprecated names, as slim as possible?
I don't know of an Emacs Lisp linter that is built in to Emacs.
I do two things for my own code to try to ensure some level of cleanliness.
First, I make sure that byte-compiling the code doesn't give any errors or warnings. The byte compiler does a certain amount of checking.
Second, I enable lexical binding. This lets the byte compiler detect a few more possible warnings.
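For example, a minimal sketch (the file and function names are made up); byte-compiling it reports the unused lexical variable along with the compiler's usual warnings:
;;; my-code.el --- example -*- lexical-binding: t; -*-

(defun my-add (a b)
  ;; Under lexical binding the byte compiler warns about `unused',
  ;; in addition to its usual checks for undefined or obsolete names.
  (let ((unused 42))
    (+ a b)))

;; M-x byte-compile-file RET my-code.el RET
;; The warnings appear in the *Compile-Log* buffer.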
This is about the best you can do with the built-in tools. If you want to go further you could write your own tree walker to perform whatever other tests you like.

Perl does not have a simple #include, OK, but WHY?

Perl does not have a C style preprocessor level "include" function. That is how it is, and there are numerous sites that explain how to more or less emulate the same sort of behavior.
The one thing I couldn't find on any of these sites is an explanation of WHY Perl does not have this functionality. Given that Perl often provides many different ways to accomplish the same thing, it is a curious omission.
Can somebody please explain why the decision was made to exclude this sort of functionality?
Perl already has require, do, eval, and here-documents, among other things. It doesn't need a built-in preprocessor; if you need one that badly, there are source filters: http://perldoc.perl.org/perlfilter.html
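For instance, a rough sketch of pulling another file into your program with do and require (the file name is made up):
use strict;
use warnings;

# do reads and evaluates the named file in place, every time it is called:
do './shared.pl' or die "couldn't run shared.pl: ", $@ || $!;

# require loads a given file only once per program run and dies on its own
# if the file can't be read or compiled:
require './shared.pl';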
In general, nobody wants #include; even C and C++ programmers would mostly be happy to give it up in exchange for:
Faster compiles
Clean module system
#include is legacy, period. If a mainstream language designer announced tomorrow that they were adding #include to (your favorite language here) you'd probably see mass hysteria, laughing, and loss of confidence in that designer.
Language designers don't implement #include in any new language; there are simply better ways to do it. In general the trend is to attempt to achieve single-pass lexing. Preprocessing requires you to incrementally expand #includes and potentially revisit the same characters repeatedly. It has been fraught with problems, and is one of the reasons that C++ is such a dog to compile. It was OK in the 60s and 70s when memory and CPU were tiny and languages and problems were simpler, as were codebases. Nowadays, you want to be able to compile a "library" once and store its type metadata with it so the compiler can access it efficiently without rescanning it. That is what Microsoft does, anyway, with precompiled headers.
So what would #include be good for?
Modules? No. See above. Modules are compiled once, export their metadata efficiently, they don't pollute the namespace of the clients, they don't recursively inject other includes, and they can be distributed in binary form, among umpteen other advantages that I'm not even smart enough to think of.
Including macros? No. Replace with constants, inlining, and generic programming, all of which can be precompiled and exported from a module.
Splicing in generated code? There are better ways to do it anyway. See modules.
The only useful functionality for the preprocessor, IMO, is conditional compilation.
#ifdef _WIN32_
// do windowsy stuff
#else
#endif
Again, Perl can do this with do, eval or require as well.
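A hedged sketch of that #ifdef in plain Perl (the module names are made up):
use strict;
use warnings;

if ($^O eq 'MSWin32') {
    require My::App::Win32;   # do windowsy stuff
} else {
    require My::App::Posix;
}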
Perl neither has nor lacks it any more than C does.
The C preprocessor was designed so that it and C need to know as little as possible about each other. There is no reason why you can't use it with Perl.
So why don't Perl programmers do it?
As codenhein explains, it's generally a bad idea to combine an include mechanism and a compiler that don't know anything about each other, as it leaves you open to some crazy errors that neither can diagnose; the fact that C programmers are used to it doesn't change that.

Why are Perl source filters bad and when is it OK to use them?

It is "common knowledge" that source filters are bad and should not be used in production code.
When answering a similar, but more specific, question I couldn't find any good references that explain clearly why filters are bad and when they can be safely used. I think now is the time to create one.
Why are source filters bad?
When is it OK to use a source filter?
Why source filters are bad:
Nothing but perl can parse Perl. (Source filters are fragile.)
When a source filter breaks pretty much anything can happen. (They can introduce subtle and very hard to find bugs.)
Source filters can break tools that work with source code. (PPI, refactoring, static analysis, etc.)
Source filters are mutually exclusive. (You can't use more than one at a time -- unless you're psychotic).
When they're okay:
You're experimenting.
You're writing throw-away code.
Your name is Damian and you must be allowed to program in Latin.
You're programming in Perl 6.
Only perl can parse Perl (see this example):
@result = (dothis $foo, $bar);
# Which of the following is it equivalent to?
@result = (dothis($foo), $bar);
@result = dothis($foo, $bar);
This kind of ambiguity makes it very hard to write source filters that always succeed and do the right thing. When things go wrong, debugging is awkward.
After crashing and burning a few times, I have developed the superstitious approach of never trying to write another source filter.
I do occasionally use Smart::Comments for debugging, though. When I do, I load the module on the command line:
$ perl -MSmart::Comments test.pl
so as to avoid any chance that it might remain enabled in production code.
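A tiny sketch of why that matters (the variable is made up): with Smart::Comments loaded, the specially marked comment below prints the value at run time; without the module it is just a comment.
use strict;
use warnings;

my $total = 42;
### total is: $total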
See also: Perl Cannot Be Parsed: A Formal Proof
I don't like source filters because you can't tell what code is going to do just by reading it. Additionally, things that look like they aren't executable, such as comments, might magically be executable with the filter. You (or more likely your coworkers) could delete what you think isn't important and break things.
Having said that, if you are implementing your own little language that you want to turn into Perl, source filters might be the right tool. However, just don't call it Perl. :)
It's worth mentioning that Devel::Declare keywords (and starting with Perl 5.11.2, pluggable keywords) aren't source filters, and don't run afoul of the "only perl can parse Perl" problem. This is because they're run by the perl parser itself, they take what they need from the input, and then they return control to the very same parser.
For example, when you declare a method in MooseX::Declare like this:
method frob ($bubble, $bobble does coerce) {
    ... # complicated code
}
The word "method" invokes the method keyword parser, which uses its own grammar to get the method name and parse the method signature (which isn't Perl, but it doesn't need to be -- it just needs to be well-defined). Then it leaves perl to parse the method body as the body of a sub. Anything anywhere in your code that isn't between the word "method" and the end of a method signature doesn't get seen by the method parser at all, so it can't break your code, no matter how tricky you get.
The problem I see is the same problem you encounter with any C/C++ macro more complex than defining a constant: It degrades your ability to understand what the code is doing by looking at it, because you're not looking at the code that actually executes.
In theory, a source filter is no more dangerous than any other module, since you could easily write a module that redefines builtins or other constructs in "unexpected" ways. In practice, however, it is quite hard to write a source filter in a way where you can prove that it's not going to make a mistake. I tried my hand at writing a source filter that implements the Perl 6 feed operators in Perl 5 (Perl6::Feeds on CPAN). You can take a look at the regular expressions to see the acrobatics required simply to figure out the boundaries of expression scope. While the filter works and provides a test bed to experiment with feeds, I wouldn't consider using it in a production environment without many, many more hours of testing.
Filter::Simple certainly comes in handy by dealing with 'the gory details of parsing quoted constructs', so I would be wary of any source filter that doesn't start there.
In all, it really depends on the filter you are using and how broad a scope it tries to match against. If it is something simple like a C macro, then it's "probably" OK, but if it's something complicated then it's a judgement call. I personally can't wait to play around with Perl 6's macro system. Finally Lisp won't have anything on Perl :-)
There is a nice example here that shows what trouble you can get into with source filters:
http://shadow.cat/blog/matt-s-trout/show-us-the-whole-code/
They used a module called Switch, which is based on source filters. And because of that, they were unable to find the source of an error message for days.

Keeping CL and Scheme straight in your head

Depending on my mood I seem to waffle back and forth between wanting a Lisp-1 and a Lisp-2. Unfortunately, beyond the obvious namespace differences, this leads to all kinds of amusing function-name (etc.) problems you run into. Case in point: trying to write some code tonight, I tried to do (map #'function listvar), which, of course, doesn't work in CL at all. It took me a bit to remember I wanted mapcar, not map. Of course it doesn't help when SLIME/Emacs shows that map IS defined as something, though obviously not the same function at all.
So, pointers on how to minimize this short of picking one or the other and sticking with it?
map is more general than mapcar; for example, you could do the following rather than using mapcar:
(map 'list #'function listvar)
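A slightly fuller sketch of the difference (return values shown in the comments):
(mapcar #'1+ '(1 2 3))        ; => (2 3 4)   CL, lists only
(map 'list #'1+ '(1 2 3))     ; => (2 3 4)   CL, any sequence, result type given explicitly
(map 'vector #'1+ #(1 2 3))   ; => #(2 3 4)
;; Scheme's plain map covers the list case: (map (lambda (x) (+ x 1)) '(1 2 3))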
How do I keep Scheme and CL separate in my head? I guess when you know both languages well enough, you just know what works in one and not the other. Despite the syntactic similarities, they are quite different languages in terms of style.
Well, I think that as soon as you get enough experience in both languages this becomes a non-issue (just as with similar natural languages, like Italian and Spanish). If you usually program in one language and switch to the other only occasionally, then unfortunately you are doomed to write Common Lisp in Scheme or vice versa ;)
One thing that helps is to have a distinct visual environment for both languages, using syntax highlighting in some other colors etc. Then at least you will always know whether you are in Common Lisp or Scheme mode.
I'm definitely aware that there are syntactic differences, though I'm certainly not fluent enough yet to use them automatically, which makes my code look much more similar for now ;-).
And I had a feeling your answer would be the case, but one can always hope for a shortcut <_<.
The easiest way to keep both languages straight is to do your thinking and code writing in Common Lisp. Common Lisp code can be converted into Scheme code with relative ease; going from Scheme to Common Lisp, however, can cause a few headaches. I remember once using a letrec in Scheme to bind both variables and functions, and having to split it up into separate CL forms for the variable and function namespaces respectively.
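To illustrate the letrec point, a rough sketch (the counting example is made up). In Scheme, a Lisp-1, one letrec can bind a value and a function together:
(letrec ((limit 10)
         (count-up (lambda (n)
                     (if (< n limit) (count-up (+ n 1)) n))))
  (count-up 0))   ; => 10
In Common Lisp, a Lisp-2, the same thing splits into let for the variable namespace and labels for the function namespace:
(let ((limit 10))
  (labels ((count-up (n)
             (if (< n limit) (count-up (+ n 1)) n)))
    (count-up 0)))   ; => 10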
In all practicality, though, I don't make a habit of writing CL code, which makes the times that I do have to all the more painful.