Adding TypeScript to CoffeeScript

I have a build chain set up that will convert a file from CoffeeScript to TypeScript to JavaScript. My question is: what is the most minimally intrusive way to add type signatures to a CoffeeScript function?
CoffeeScript supports raw JavaScript through backticks. However, that means CoffeeScript no longer understands the backtick snippet.
CoffeeScript rejects these:
f = (`a:String`) -> a + 2
f = (a`:String`) -> a + 2
I can write this above the function:
`var f = (String) => any`
It compiles, but does not do the type-checking. I think this is because CoffeeScript has already declared the variable.
The only way I could figure out how to make it work requires a lot of boilerplate:
f = (a) ->
  `return (function(a:String){`
  a + 2;
  `})(a)`
Backticks do not seem to work properly in the new CoffeeScript Redux compiler:
https://github.com/michaelficarra/CoffeeScriptRedux/issues/71
I am well aware that this is a dubious endeavor; it is just an experiment right now. I currently use contracts.coffee, but I am looking for actual types.

Here's my project, which transpiles CoffeeScript into TypeScript and then merges it with a d.ts file containing types, then reports compilation errors, if any.
It's called Compiled-Coffee.

If you want to write CoffeeScript, it is best to write CoffeeScript and compile to JavaScript.
The benefit of TypeScript is mostly at design time, via better tooling, so using it in the middle between CoffeeScript and JavaScript adds very little: your design-time feedback and tooling will still be based on your CoffeeScript code.
You can consume the libraries you write in CoffeeScript in TypeScript and vice-versa, so you can maintain your CoffeeScript libraries in CoffeeScript and consume them in your new TypeScript files while you decide which way to go.
Update: I'm not sure how there can be such a wide misinterpretation of this answer - I'm going to assume that I haven't explained it well (rather than assuming it is merely a straw-man argument or hyper-sensitivity to language comparison).
TypeScript is indeed a type system for JavaScript. Static types are of more use to you as a programmer earlier in the workflow. Having design-time warnings in your IDE means rapid correction of common errors like mistyped variable names, incorrect parameters, invalid operations and a whole lot more. Having code underlined and annotated with an error means instant feedback. Having this at compile time is good, but your feedback loop is longer. I won't even talk about run time, given that TypeScript erases all types by that point.
As to all the "TypeScript vs CoffeeScript" comments - this question is not about that at all. The question is about compiling from CoffeeScript to TypeScript and then to JavaScript. Let's look at why this might not be ideal:
You will only get type feedback at compile time
You won't get auto-completion
Your CoffeeScript code will no longer be compact - it will have type annotations
Your CoffeeScript code will no longer be valid without your intermediate compiler
You will have to use an additional compiler, and it will need to stay in step with CoffeeScript version x and TypeScript version y
Your IDE won't understand your CoffeeScript code

I think what I came up with is the best I can do. Things are harder in the new CoffeeScript Redux compiler: it would actually be easier to hack the current CoffeeScript compiler to make this work.
The way to make this look less hacky is:
`var f : (a:Number) => Number = originalF`
However, TypeScript's weak type inference doesn't do that well with this form.
This gets proper type analysis:
f = (a) ->
  `var a : Number = a`
  a + 2
However, I am still not sure how to specify a return value with this form.

TypeScript is JavaScript with strong typing.
CoffeeScript provides a more comfortable way of writing and reading code.
I do not treat CoffeeScript as a language in its own right. It's just a style that can be attached to any language: a coffee-style smart computer language should be the future.
'Supporting' strong typing through backticks is very ugly.
The correct way to implement strongly typed CoffeeScript would be to:
Modify the CoffeeScriptRedux source to add strong type support, as TypedCoffeeScript has already done; or
Modify the TypeScript parser source to use CoffeeScript syntax. It seems nobody has done this.


Is there a standard Swift AST like there is for JavaScript?

In JavaScript we have estree, the AST definition that evolved from Mozilla's implementation; nowadays, if you build an AST transformer for JavaScript in JavaScript, you probably use this structure. Do we have anything like this for Swift, i.e. for the Swift AST?
I see we have a grammar, but what about an AST? I guess I could build one from that, but still.
If nothing standard, do we have any AST examples?
The Swift project offers SwiftSyntax as a package for working with Swift source code, in Swift. Under the hood, it's powered by the compiler's own libSyntax, written in C++.
Note that this currently isn't the representation that the Swift compiler proper uses for actually compiling Swift code: libSyntax focuses on the source code itself for rewriting, formatting, transformations, etc., but is largely devoid of the semantic information found in a compiler AST that is necessary for transforming the source into machine code. If you're just looking to operate on the AST without those semantics, this may be sufficient for your use case.
The repo README should have some info to get you started, an example, and some real-world use cases showing concrete usage of the library.

Idiomatic Rust plugin system

I want to factor some code out into a plugin system. Inside my project, I have a trait called Provider, which is the core of my plugin system. If you activate the feature "consumer" you can use plugins; if you don't, you are an author of plugins.
I want authors of plugins to get their code into my program by compiling to a shared library. Is a shared library a good design decision? The limitation of the plugins is using Rust anyway.
Does the plugin host have to go the C way for loading the shared library: loading an unmangled function?
I just want authors to use the trait Provider for implementing their plugins and that's it.
After taking a look at sharedlib and libloading, it seems impossible to load plugins in an idiomatic Rust way.
I'd just like to load trait objects into my ProviderLoader:
// lib.rs
pub struct Sample { ... }

pub trait Provider {
    fn get_sample(&self) -> Sample;
}

pub struct ProviderLoader {
    plugins: Vec<Box<Provider>>
}
When the program is shipped, the file tree would look like:
.
├── fancy_program.exe
└── providers
    ├── fp_awesomedude.dll
    └── fp_niceplugin.dll
Is that possible if plugins are compiled to shared libs? This would also affect the decision of the plugins' crate-type.
Do you have other ideas? Maybe I'm on the wrong path so that shared libs aren't the holy grail.
I first posted this on the Rust forum. A friend advised me to give it a try on Stack Overflow.
UPDATE 3/27/2018:
After using plugins this way for some time, I have to caution that in my experience things do get out of sync, and it can be very frustrating to debug (strange segfaults, weird OS errors). Even in cases where my team independently verified the dependencies were in sync, passing non-primitive structs between the dynamic library binaries tended to fail on OS X for some reason. I'd like to revisit this, find what cases it happens in, and perhaps open an issue with Rust, but I'm going to advise caution with this going forward.
LLDB and valgrind are near-essential to debug these issues.
Intro
I've been investigating things along these lines myself, and I've found there's little official documentation for this, so I decided to play around!
First let me note, as there is little official word on these properties please do not rely on any code here if you're trying to keep planes in the air or nuclear missiles from errantly launching, at least not without doing far more comprehensive testing than I've done. I'm not responsible if the code here deletes your OS and emails an erroneous tearful confession of committing the Zodiac killings to your local police; we're on the fringes of Rust here and things could change from one release or toolchain to another.
I have personally tested this on Rust 1.20 stable in both debug and release configurations on Windows 10 (stable-x86_64-pc-windows-msvc) and CentOS 7 (stable-x86_64-unknown-linux-gnu).
Approach
The approach I took was a shared common crate that both other crates list as a dependency, containing the common struct and trait definitions. At first, I was also going to test having a struct with the same layout, or a trait with the same definition, declared independently in both libraries, but I opted against it because it's too fragile and you wouldn't want to do it in a real design. That said, if anybody wants to test this, feel free to do a PR on the repository above and I will update this answer.
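As a rough illustration of that layout, the shared crate might look like the following sketch. The crate name common, the value field, and the reuse of the question's Provider/Sample names are my own assumptions, not the answer's actual test code.

// common/src/lib.rs -- a hypothetical shared crate that both the host and the
// plugin list as a dependency (e.g. a path dependency in a workspace), so that
// struct layouts and trait definitions are guaranteed to match on both sides.
pub struct Sample {
    pub value: i32,
}

pub trait Provider {
    fn get_sample(&self) -> Sample;
}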
In addition, the Rust plugin was declared dylib. I'm not sure how compiling as cdylib would interact, since I think it would mean that upon loading the plugin there are two versions of the Rust standard library hanging around (since I believe cdylib statically links the Rust stdlib into the shared object).
Tests
General Notes
The structs I tested were not declared #[repr(C)]. This could provide an extra layer of safety by guaranteeing a layout, but I was most curious about writing "pure" Rust plugins with as little "treating Rust like C" fiddling as possible. We already know you can use Rust via FFI by wrapping things in opaque pointers, manually dropping, and such, so it's not very enlightening to test this.
The function signature I used was pub fn foo(args) -> output with the #[no_mangle] attribute. It turns out that rustfmt automatically changes extern "Rust" fn to simply fn. I'm not sure I agree with this in this case, since they are most certainly "extern" functions here, but I will choose to abide by rustfmt.
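For concreteness, a plugin-side export along those lines might look like this sketch (the function name and body are made up for illustration; the crate itself would be built as a dylib, as described above):

// plugin/src/lib.rs -- built with crate-type = ["dylib"].
// #[no_mangle] keeps the symbol name predictable; the calling convention
// stays the default Rust ABI rather than extern "C".
#[no_mangle]
pub fn add_one(x: &mut i32) {
    *x += 1;
}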
Remember that even though this is Rust, this has elements of unsafety, because libloading (or the unstable DynamicLib functionality) will not type-check the symbols for you. At first I thought my Vec test was proving you couldn't pass Vecs between host and plugin, until I realized that on one end I had Vec<i32> and on the other I had Vec<usize>.
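On the host side, loading the add_one export sketched above with libloading might look roughly like this. The path and symbol name are placeholders based on the question's file tree, and the exact API differs between libloading versions (for instance, Library::new is unsafe in newer releases). Note that the type you ascribe to the Symbol is taken entirely on faith, which is exactly the Vec<i32> versus Vec<usize> trap described above.

use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Nothing verifies that this path or symbol exists until runtime.
    let lib = unsafe { Library::new("providers/fp_niceplugin.dll")? };

    // Nothing verifies this signature either: if the plugin really exports
    // fn(&mut Vec<usize>) and we claim fn(&mut Vec<i32>) here, it still loads.
    let add_one: Symbol<fn(&mut i32)> = unsafe { lib.get(b"add_one")? };

    let mut x = 41;
    add_one(&mut x);
    assert_eq!(x, 42);
    Ok(())
}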
Interestingly, there were a few times I pointed an optimized test build at an unoptimized plugin and vice versa and it still worked. However, I still can't in good faith recommend building plugins and host applications with different toolchains, and even if you do, I can't promise that for some reason rustc/llvm won't decide to apply certain optimizations to one version of a struct and not another. In addition, I'm not sure whether passing types through FFI prevents certain optimizations, such as null pointer optimizations, from occurring.
You're still limited to calling bare functions (no Foo::bar) because of the lack of name mangling. In addition, because functions with trait bounds are monomorphized, generic functions and structs are also out: the compiler can't know you're going to call foo<i32>, so no foo<i32> is going to be generated. Any function crossing the plugin boundary must take only concrete types and return only concrete types.
Similarly, you have to be careful with lifetimes for similar reasons: since there's no static lifetime checking, Rust is forced to believe you when you say a function returns &'a when it really returns &'b.
Native Rust
The first tests I performed used no custom structures, just pure, native Rust types. This would give a baseline for whether this is even possible. I chose three baseline cases: &mut i32, &mut Vec, and Option<i32> -> Option<i32>. These were all chosen for very specific reasons: the &mut i32 because it tests a reference, the &mut Vec because it tests growing heap memory allocated in the host application, and the Option because it tests both passing by move and matching a simple enum.
All three work as expected. Mutating the reference mutates the value, pushing to a Vec works properly, and the Option works properly whether Some or None.
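A minimal sketch of what the plugin-side exports for those three tests could look like (the names and bodies here are mine, not the answer's actual test code):

// Plugin side: three exports mirroring the three baseline cases.
#[no_mangle]
pub fn double_in_place(x: &mut i32) {
    *x *= 2; // mutate through a reference owned by the host
}

#[no_mangle]
pub fn push_seven(v: &mut Vec<i32>) {
    v.push(7); // grow a Vec whose buffer was allocated in the host
}

#[no_mangle]
pub fn maybe_increment(o: Option<i32>) -> Option<i32> {
    o.map(|n| n + 1) // pass by move and match a simple enum
}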
Shared Struct Definition
This was meant to test whether you could pass a non-builtin struct, with a common definition on both sides, between plugin and host. This works as expected, but as mentioned in the "General Notes" section, I can't promise Rust won't optimize a structure definition on one side and not the other. Always test your specific use case and use CI in case it changes.
Boxed Trait Object
This test uses a struct whose definition exists only on the plugin side, but which implements a trait defined in a common crate, and is returned as a Box<Trait>. This works as expected: calling trait_obj.fun() works properly.
At first I actually anticipated there would be issues with dropping unless the trait explicitly had Drop as a bound, but it turns out Drop is properly called as well (this was verified by setting the value of a variable declared on the test stack, via a raw pointer, from the struct's drop function). (Naturally I'm aware drop is always called even with trait objects in Rust, but I wasn't sure if dynamic libraries would complicate it.)
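A sketch of the plugin side of this test, reusing the hypothetical common crate from earlier (the struct name, field value, and drop body are illustrative only):

// Plugin side: the struct is private to the plugin; only the trait is shared.
use common::{Provider, Sample};

struct PluginProvider;

impl Provider for PluginProvider {
    fn get_sample(&self) -> Sample {
        Sample { value: 42 }
    }
}

impl Drop for PluginProvider {
    fn drop(&mut self) {
        // Observed to run correctly even though the value is dropped on the host side.
    }
}

#[no_mangle]
pub fn new_provider() -> Box<dyn Provider> {
    Box::new(PluginProvider)
}

On the host side, the returned trait object should not outlive the loaded library, as the note below cautions.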
NOTE:
I did not test what would happen if you load a plugin, create a trait object, then drop the plugin (which would likely close it). I can only assume this is potentially catastrophic. I recommend keeping the plugin open as long as the trait object persists.
Remarks
Plugins work almost exactly as you'd expect from just linking a crate naturally, albeit with some restrictions and pitfalls. As long as you test, I think this is a very natural way to go. It makes symbol loading more bearable, for instance, if you only need to load a single new function and then receive a trait object implementing an interface. It also avoids nasty C memory leaks from forgetting (or not being able) to load a drop/free function. That said, be careful, and always test!
There is no official plugin system, and you cannot do plugins loaded at runtime in pure Rust. I saw some discussions about doing a native plugin system, but nothing is decided for now, and maybe there will never be any such thing. You can use one of these solutions:
You can extend your code with native dynamic libraries using FFI. To use the C ABI, you have to use #[repr(C)], the #[no_mangle] attribute, extern, etc. You will find more information by searching for "Rust FFI" on the internet. With this solution, you must use raw pointers: they come with no safety guarantee (i.e. you must use unsafe code).
Of course, you can write your dynamic library in Rust, but to load it and call the functions, you must go through the C ABI. This means that the safety guarantees of Rust do not apply there. Furthermore, you cannot use higher-level Rust features such as traits or enums between the library and the binary.
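A minimal sketch of what going through the C ABI looks like (the struct and function here are illustrative only, not part of any existing API):

// Plugin side, exposed over the C ABI: layout pinned with #[repr(C)],
// symbol left unmangled, calling convention forced to "C".
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

#[no_mangle]
pub extern "C" fn point_length(p: *const Point) -> f64 {
    // Raw pointers carry no safety guarantees; the caller must uphold them.
    let p = unsafe { &*p };
    (p.x * p.x + p.y * p.y).sqrt()
}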
If you do not want this complexity, you can use a language suited to extending Rust, with which you can dynamically add functions to your code and execute them with the same guarantees as in Rust. This is, in my opinion, the easier way to go: if you have the choice, and if execution speed is not critical, use this to avoid a tricky C/Rust interface.
Here is a (not exhaustive) list of languages that can easily extend Rust:
Gluon, a functional language like Haskell
Dyon, a small but powerful scripting language intended for video games
Lua with rlua or hlua
You can also use Python or JavaScript, or see the list in awesome-rust.

Scala source code definition of "def" and other built ins

I was doing some research on how to solve this question. However, I am wondering if I can start learning how functions work, or how arguments are passed into the local scope, by reading the source code of Scala.
I know the source code of Scala is hosted on GitHub; my question is how to locate the definition of def.
Or, more generally, how do I locate the source code of certain built-in functions and operators?
The source code for everything in the Scala standard library is under https://github.com/scala/scala/tree/2.11.x/src/library/scala.
Also, the Scaladoc for the standard library includes links to the source code. So e.g. if you're interested in scala.Option and you're looking at http://www.scala-lang.org/api/2.11.7/#scala.Option, notice that page has "Source: Option.scala" where "Option.scala" is hyperlinked to the source code.
For something like def, which is not part of the standard library, but part of the language, well... there is no single place where def itself is defined. The compiler has 25 phases (you can list them by running scalac -Xshow-phases) and basically every phase participates in the job of making def mean what it means.
If you want to understand def, you'd probably be better off reading the Scala Language Specification; it's highly technical, but still much more approachable than the source code for the compiler.
The part of the spec that addresses your question about named and default arguments is SLS 6.6.1.

Scala formatter - show named parameter

I have a relatively large Scala code base that does not use named parameters for any function/class calls. Rather than going in and manually entering them, which would be a very tedious process, I was looking for a formatter to do the job. The best I found is scalariform, but I'm not sure whether I can even write a rule for something so complex.
I'm curious whether anyone has run into a similar problem and found a powerful enough formatter.
The Scala Refactoring library might be something you could use. You will need some knowledge of Scala's Abstract Syntax Tree representation.
Why do you want to use named parameters throughout your code base? I like IntelliJ's default, which is to suggest naming boolean arguments (only).

How to use CoffeeScript together with Google Closure

Recently I started to use Google Closure Tools for my JavaScript development. Until now, I used to write my code in CoffeeScript; however, the JavaScript generated by CoffeeScript seems to be incompatible with Google Closure Compiler's advanced mode.
Is there any extension to the CoffeeScript compiler adding Google Closure support?
There are various tools aiming to make CoffeeScript usable with Google Closure Tools. I will describe three of them:
Bolinfest's CoffeeScript fork
Features:
Fixed function binding, loops, comprehensions, in operator and various other incompatibilities
Fixed classes syntax for Google Closure
Automatic generation of @constructor and @extends annotations
Automatically inserts a goog.provide statement for each class declared
Python-like "include namespace as alias" support, translated to goog.require and goog.scope
Drawbacks:
The constructor has to be the very first statement in the class
Cannot use short aliases for classes inside the class (i.e. class My.Long.Named.Car cannot be referred to as Car within the class definition, as pure CoffeeScript allows)
User-written JSDoc comments don't get merged with compiler-generated ones
Missing provide equivalent for include
No support for type casting; this can be done only by inserting pure JavaScript code inside backticks (`)
Based on outdated CoffeeScript 1.0
Read more at http://bolinfest.com/coffee/
My CoffeeScript fork
Disclaimer: I am the author of this solution
This solution is inspired by the Bolinfest's work and extends it in these ways:
The constructor can be placed anywhere inside the class
Short aliases for classes work, using goog.scope
User-written JSDoc comments get merged with compiler-generated ones; user-written @constructor and @extends annotations are replaced by the generated ones
Each namespace is provided or included at most once; a namespace that is provided is never included. You can provide a namespace with the keyword provide
Support for typecasting using cast<typeToCastTo>(valueToBeCast) syntax
Based on CoffeeScript 1.6
Read more at https://github.com/hleumas/coffee-script/wiki
Steida's Coffee2Closure
Unlike the two solutions above, Steida's Coffee2Closure is a postprocessor of the JavaScript code generated by the upstream, untweaked CoffeeScript. This approach has one major advantage: it will need no or only slight updates as CoffeeScript development continues, so it will stay current. However, by the very nature of this approach, some of the features cannot be delivered. Currently it fixes only classes and bindings, loops, the in operator and a few other incompatibilities. It has no support for automatic annotation generation, type casting or custom keywords.
https://github.com/Steida/coffee2closure