How to use CoffeeScript together with Google Closure

Recently I started using Google Closure Tools for my JavaScript development. Until now I have written my code in CoffeeScript; however, the JavaScript generated by CoffeeScript seems to be incompatible with the Google Closure Compiler's advanced mode.
Is there any extension to the CoffeeScript compiler adding Google Closure support?

There are various tools aiming to make CoffeeScript usable with Google Closure Tools. I will describe three of them:
Bolinfest's CoffeeScript fork
Features:
Fixed function binding, loops, comprehensions, the in operator and various other incompatibilities
Fixed class syntax for Google Closure
Automatic generation of @constructor and @extends annotations
Automatically inserts a goog.provide statement for each class declared
Python-like "include namespace as alias" support, translated to goog.require and goog.scope
Drawbacks:
Constructor has to be the very first statement in the class
Cannot use short aliases for classes inside the class (i.e. class My.Long.Named.Car cannot be referred to as Car inside the class definition, as pure CoffeeScript allows)
User-written JSDoc comments don't get merged with compiler-generated ones
Missing a provide equivalent for include
No support for type casting; it can only be done by inserting raw JavaScript code inside backticks (`)
Based on outdated CoffeeScript 1.0
Read more at http://bolinfest.com/coffee/
My CoffeeScript fork
Disclaimer: I am the author of this solution
This solution is inspired by Bolinfest's work and extends it in these ways:
Constructor can be placed anywhere inside the class
Short aliases for classes work using goog.scope
User-written JSDoc comments get merged with the compiler-generated ones; user-written @constructor and @extends annotations are replaced by the generated ones
Each namespace is provided or included at most once, and a namespace that is provided is never included. You can provide a namespace with the provide keyword
Support for typecasting using cast<typeToCastTo>(valueToBeCast) syntax
Based on CoffeeScript 1.6
Read more at https://github.com/hleumas/coffee-script/wiki
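For illustration, here is a rough sketch of how a class might look in this dialect; the My.App names are hypothetical, and provide and cast are the fork-specific keywords described above, so this is not valid upstream CoffeeScript:
# provide marks the namespace as goog.provide'd in the generated output,
# and cast<Type>(value) emits a Closure type cast.
provide My.App.Car

class My.App.Car extends My.App.Vehicle
  # A method may appear before the constructor in this fork.
  describe: -> "Car"

  constructor: (engine) ->
    super()
    @engine = cast<My.App.Engine>(engine)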
Steida's Coffee2Closure
Unlike the two solutions above, Steida's Coffee2Closure is a postprocessor of the JavaScript code generated by the upstream, untweaked CoffeeScript compiler. This approach has one major advantage: it needs no (or only slight) updates as CoffeeScript development continues, so it stays current. However, by the very nature of this approach, some of the features cannot be delivered. Currently it fixes only classes and bindings, loops, the in operator and a few other incompatibilities. It has no support for automatic annotation generation, type casting or custom keywords.
https://github.com/Steida/coffee2closure


Is there a standard Swift AST like there is for JavaScript?

In JavaScript we have estree, the AST definition that evolved from Mozilla's implementation; nowadays, if you build an AST transformer for a JS AST in JS, you probably use this structure. Do we have anything like this for Swift, i.e. for the Swift AST?
I see we have a grammar, but what about an AST? I guess I can derive one from that, but still.
If there is nothing standard, do we have any AST examples?
The Swift project offers SwiftSyntax as a package for working with Swift source code, in Swift. Under the hood, it's powered by the compiler's own libSyntax, written in C++.
Note that this currently isn't the representation that the Swift compiler proper uses for actually compiling Swift code: libSyntax focuses on the source code itself for rewriting, formatting, transformations, etc., but is largely devoid of the semantic information that you would find in a compiler AST, which is necessary for transforming the source into machine code. If you're just looking to operate on the AST without those semantics, this may be sufficient for your use case.
The repo README should have some info to get you started, an example, and some real-world use cases showing concrete usage of the library.
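If you just want to see what operating on the tree looks like, here is a rough sketch; the exact module and type names have shifted between SwiftSyntax releases, so treat it as illustrative rather than definitive:
// Assumes a recent SwiftSyntax release where parsing lives in SwiftParser.
import SwiftSyntax
import SwiftParser

let tree = Parser.parse(source: "let answer = 41 + 1")

// Walk the syntax tree and print every token in the source.
final class TokenPrinter: SyntaxVisitor {
    override func visit(_ token: TokenSyntax) -> SyntaxVisitorContinueKind {
        print(token.text)
        return .visitChildren
    }
}
TokenPrinter(viewMode: .sourceAccurate).walk(tree)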

What does @pragma("vm:prefer-inline") mean in Flutter?

I have seen a lot of code like the following in frameworks.
What does the @pragma annotation do?
@pragma("vm:entry-point", "call")
@pragma("vm:entry-point", "set")
@pragma("vm:entry-point", "get")
@pragma("vm:prefer-inline")
...
For the general meaning of @pragma, see NKSM's answer.
I will cite two important specific use cases:
@pragma('vm:prefer-inline') to inline functions
(I mean compile-time inlining, which has nothing to do with 'inline function' used as a synonym for closure/lambda, as sometimes happens.)
This annotation is similar to the inline keyword in Kotlin.
@pragma('vm:entry-point') to mark a function (or other entities, such as classes) to indicate to the compiler that it will be used from native code. Without this annotation, the Dart compiler could strip out unused functions, inline them, shrink names, etc., and the native code would fail to call them.
A very good doc (written much more clearly than usual) about entry-point is https://github.com/dart-lang/sdk/blob/master/runtime/docs/compiler/aot/entry_point_pragma.md
If you want to do a first test with inlining in Dart, I suggest compiling with dart2js, which outputs fairly readable JavaScript code (at least until you increase the shrinking level beyond the default; and readability is obviously decent only in minimal programs). However, inlining via dart2js requires a slightly different @pragma annotation: @pragma('dart2js:tryInline').
An interesting discussion about inlining in Dart can be found in
dart-lang issue #40522 - Annotation for method inlining
In general, I suggest Mraleph's blog. His latest article is on benchmarking in Dart and also shows the use of @pragma('vm:entry-point'). Mraleph is a Dart SDK developer (he is also an author of the official doc cited above), and his blog is a very valuable source on Dart VM related topics.
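To make the two use cases concrete, here is a minimal sketch (callMeFromNative and square are made-up names):
// Keep this function alive through AOT tree shaking so that native code
// (for example through the embedder's C API) can still invoke it.
@pragma('vm:entry-point')
void callMeFromNative() {
  print('called from native code');
}

// Hint to the VM that this small helper should be inlined at its call sites.
@pragma('vm:prefer-inline')
int square(int x) => x * x;

void main() {
  print(square(7)); // 49
}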
Flutter uses the Dart programming language.
The pragma class is a hint to tools.
Tools that work with Dart programs may accept hints to guide their behavior as pragma annotations on declarations. Each tool decides which hints it accepts, what they mean, and whether and how they apply to sub-parts of the annotated entity.
Tools that recognize pragma hints should pick a pragma prefix to identify the tool. They should recognize any hint with a name starting with their prefix followed by : as if it was intended for that tool. A hint with a prefix for another tool should be ignored (unless compatibility with that other tool is a goal).
A tool may recognize unprefixed names as well, if they would recognize that name with their own prefix in front.
If the hint can be parameterized, an extra options object can be added as well.
For example:
@pragma('Tool:pragma-name', [param1, param2, ...])
class Foo { }
@pragma('OtherTool:other-pragma')
void foo() { }
Here class Foo is annotated with a Tool-specific pragma 'pragma-name' and function foo is annotated with a pragma 'other-pragma' specific to OtherTool.
The above can be found in the dart.dev documentation.
The @pragma('vm:entry-point') annotation here relates to tree shaking. In AOT (ahead-of-time) compilation, anything that cannot be reached from the application's main entry point is discarded as dead code. The injection logic of AOP code is non-invasive, so it will obviously not be called from the main entry point; therefore, this annotation is required to tell the compiler not to discard that logic.

Is there an alternative to the deprecated enclosingClass method in the Scala reflection library?

I am writing a macro to get the enclosing val/var definition. I can get the enclosing val/var symbol, but I cannot get the defining tree. One solution here suggested using enclosingClass:
https://stackoverflow.com/a/18451114/11989864
But all the enclosing-tree-style APIs are deprecated:
https://www.scala-lang.org/api/2.13.0/scala-reflect/scala/reflect/macros/blackbox/Context.html
Is there a way to implement the functionality of enclosingClass? Or to get a tree from a symbol?
The reasons for the deprecation are:
Starting from Scala 2.11.0, the APIs to get the trees enclosing by
the current macro application are deprecated, and the reasons for that
are two-fold. Firstly, we would like to move towards the philosophy of
locally-expanded macros, as it has proven to be important for
understanding of code. Secondly, within the current architecture of
scalac, we are unable to have c.enclosingTree-style APIs working
robustly. Required changes to the typechecker would greatly exceed the
effort that we would like to expend on this feature given the
existence of more pressing concerns at the moment. This is somewhat
aligned with the overall evolution of macros during the 2.11
development cycle, where we played with c.introduceTopLevel and
c.introduceMember, but at the end of the day decided to reject them.
If you're relying on the now deprecated APIs, consider using the new
c.internal.enclosingOwner method that can be used to obtain the names
of enclosing definitions. Alternatively try reformulating your macros
in terms of completely local expansion...
https://www.scala-lang.org/api/2.13.0/scala-reflect/scala/reflect/macros/Enclosures.html
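For example, here is a minimal sketch of the c.internal.enclosingOwner approach (enclosingName is a hypothetical helper, not a standard API):
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object EnclosingName {
  // Expands to the name of the enclosing val/var/def as a string literal.
  def enclosingName: String = macro impl

  def impl(c: blackbox.Context): c.Tree = {
    import c.universe._
    val owner = c.internal.enclosingOwner // Symbol of the enclosing definition
    Literal(Constant(owner.name.decodedName.toString.trim))
  }
}

// Usage, in a separate compilation unit:
//   val greeting = EnclosingName.enclosingName // "greeting"
This gives you the symbol and its name, but, as noted below, not the defining tree.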
Regarding getting a tree from a symbol
there's no standard way to go from a symbol to a defining tree
https://stackoverflow.com/a/13768595/5249621
Why do you need a def macro to get the enclosing val/var definition?
Maybe macro annotations would be enough:
https://docs.scala-lang.org/overviews/macros/annotations.html

What types of Macros/Syntax Extensions/Compiler Plugins are there?

I am very confused by the many terms used for several macro-like things in the Rust ecosystem. Could someone clarify what macros/syntax extensions/compiler plugins there are as well as explain the relationship between those terms?
You are right: it is confusing. Especially, because most of those features are unstable and change fairly often. But I'll try to summarize the current situation (December 2016).
Let's start with the Syntax Extension: it's something that has to be "called" or annotated manually in order to have any effect. There are three kinds of syntax extensions, which differ in the way you annotate them:
function-like syntax extensions: these are probably the most common syntax extensions, also called "macros". The syntax for invoking them is foo!(…) or (and this is pretty rare) foo! some_ident (…), where foo is the macro's name. Note that the () parentheses can be replaced by [] or {}. Function-like syntax extensions can be defined either as a "macro by example" or as a "procedural macro".
attribute-like syntax extensions: these are invoked like #[foo(…)], where the parentheses are not necessary and, again, foo is the name of the syntax extension. The item the attribute belongs to can then be modified or extended with additional items (decorator).
custom derives: most Rust-programmers have already used the #[derive(…)] attribute. Of course, derive itself can be seen as attribute-like syntax extension. But it can also be extended, which is then invoked like #[derive(Foo)], where Foo is the name of the custom derive.
Most of these syntax extensions are also "compiler plugins". The only exception is function-like syntax extensions defined via "macro by example" (meaning the macro_rules! syntax). Macros by example can be defined in your source code without writing a compiler plugin at all.
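For example, a plain macro by example needs nothing but macro_rules! in your own crate; the max! macro below is just an illustration:
// A function-like syntax extension defined as a macro by example;
// no compiler plugin and no nightly compiler required.
macro_rules! max {
    ($a:expr, $b:expr) => {
        if $a > $b { $a } else { $b }
    };
}

fn main() {
    // Invoked like any function-like macro; () could also be [] or {}.
    println!("{}", max!(3, 7)); // prints 7
}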
But there are also compiler plugins that aren't syntax extensions. Those types of compiler plugins are linters or other plugins which run some code at some stage of the compiling process. They don't need to be invoked manually: once loaded, the compiler will call them at certain points during compilation.
All compiler plugins need to be loaded – either via #![plugin(foo)] at the crate root or with the -Zextra-plugins=foo,bar command line parameter – before they can have any effect!
Compiler plugins are currently unstable, therefore you need a nightly compiler to use them. But the "Macros 1.1" RFC will probably be stabilized soon, which means that a small subset of compiler plugins can then be used with the stable compiler.
Useful links:
Documentation about registering compiler plugins
Book Chapter about compiler plugins

Adding TypeScript to CoffeeScript

I have a build chain set up that will convert a file from CoffeeScript to TypeScript to JavaScript. My question is: what is the most minimally intrusive way to add type signatures to a CoffeeScript function?
CoffeeScript supports raw JavaScript through backticks. However, that means CoffeeScript no longer understands the backticked snippet.
CoffeeScript rejects these:
f = (`a:String`) -> a + 2
f = (a`:String`) -> a + 2
I can write this above the function:
`var f = (String) => any`
It compiles, but does not do the type checking. I think this is because CoffeeScript has already declared the variable.
The only way I could figure out how to make it work requires a lot of boilerplate:
f = (a) ->
  `return (function(a:String){`
  a + 2;
  `})(a)`
Backticks do not seem to work properly in the new CoffeeScript Redux compiler:
https://github.com/michaelficarra/CoffeeScriptRedux/issues/71
I am well aware that this is a dubious endeavor; it is just an experiment right now. I currently use contracts.coffee, but I am looking for actual types.
Here's my project, which transpiles CoffeeScript into TypeScript and then merges it with a d.ts file containing the types, then reports compilation errors, if any.
It's called Compiled-Coffee.
If you want to write CoffeeScript, it is best to write CoffeeScript and compile to JavaScript.
The benefit of TypeScript is mostly a design-time benefit and better tooling, so using it in the middle between CoffeeScript and JavaScript adds very little, as your design-time experience and tooling will be based on your CoffeeScript code.
You can consume the libraries you write in CoffeeScript in TypeScript and vice-versa, so you can maintain your CoffeeScript libraries in CoffeeScript and consume them in your new TypeScript files while you decide which way to go.
Update: I'm not sure how there can be such a wide misinterpretation of this answer - I'm going to assume that I haven't explained it well (rather than assuming it is merely a straw-man argument or hyper-sensitivity to language comparison).
TypeScript is indeed a type system for JavaScript. Static types are of more use to you as a programmer earlier in the workflow. Having design-time warnings in your IDE means rapid correction of common errors like mistyped variable names, incorrect parameters, invalid operations and a whole lot more. Having code underlined and annotated with an error means instant feedback. Having this at compile time is good, but your feedback loop is longer. I won't even talk about run time, given that all types are erased by that point when using TypeScript.
As to all the "TypeScript vs CoffeeScript" comments - this question is not about that at all. The question is about compiling from CoffeeScript to TypeScript and then to JavaScript. Let's look at why this might not be ideal:
You will only get type feedback at compile time
You won't get auto-completion
Your CoffeeScript code will no longer be compact - it will have type annotations
Your CoffeeScript code will no longer be valid without your intermediate compiler
You will have to use an additional compiler and it will need to be in-step with CoffeeScript version x and TypeScript version y
Your IDE won't understand your CoffeeScript code
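As a sketch of the "consume your CoffeeScript from TypeScript" route mentioned above (the module and function names here are made up), a hand-written declaration file is enough to get design-time checking on the TypeScript side while the CoffeeScript source stays untouched:
// math.d.ts: hand-written declarations for a module authored in CoffeeScript.
declare module "math" {
  export function add(a: number, b: number): number;
}

// app.ts: the TypeScript consumer is checked against those declarations.
//   import { add } from "math";
//   const sum: number = add(1, 2); // OK
//   add("1", 2);                   // design-time error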
I think what I came up with is the best I can do. Things are harder in the new CoffeeScript Redux compiler: it would actually be easier to hack the current CoffeeScript compiler to make this work.
The way to make this look less hacky is:
`var f : (a:Number) => Number = originalF`
However, TypeScript's weak type inference doesn't do that well with this form.
This gets proper type analysis:
f = (a) ->
  `var a : Number = a`
  a + 2
However, I am still not sure how to specify a return value with this form.
TypeScript is a strongly typed JavaScript.
CoffeeScript provides a more comfortable way of writing and reading.
I do not treat CoffeeScript as a language.
It's just a way, a style that can be attached to any language: a coffee-style smart computer language should be the future.
Using backticks to 'support' strong typing like this is very ugly and clumsy.
The correct way to implement a strongly typed CoffeeScript would be one of the following:
Modify the CoffeeScriptRedux source to add strong-type support; TypedCoffeeScript has already done this.
Modify the TypeScript parser source to use CoffeeScript syntax; it seems nobody has done this.