How to define convenient unit of measurement syntax in CoffeeScript?

In this example of using Groovy, the author describes how one can use Groovy tricks to define a syntax for units of measurements, such that you can write, e.g.,
3.cm + 12.m * 3 - 1.km
and have it work as expected. Is there any way to define a similarly clever syntax for associating units of measurement with numbers in CoffeeScript? (I'm very new to CoffeeScript; sorry if this is something that has already been solved or has an obvious answer.)

I think BasicWolf's answer is the most idiomatic, as you could have those functions in their own module and import them only when you want to use them, without having to pollute the global namespace or the JS builtin objects.
In Groovy, you can use a Category to avoid polluting the builtin classes with extra methods.
But, if you don't care about adding things to the builtin objects, you can go a step further and use Object.defineProperties to make the syntax of this exactly like Groovy's example :)
Object.defineProperties Number.prototype,
  km: {get: -> @ * 1000}
  m: {get: -> @}
  cm: {get: -> @ * 0.01}

console.log 3.cm + 12.m * 3 - 1.km # -> -963.97

I wouldn't recommend it, but this works:
Number::cm = ->
  this / 100
Number::m = ->
  this
Number::km = ->
  this * 1000

3.cm() + 12.m() * 3 - 1.km() # evaluates to -963.97
You can't get rid of the parentheses because 3.cm references the function cm instead of invoking it.

Unfortunately it is not possible to do it that way with CoffeeScript. What you can do is something like (in both CoffeeScript and JavaScript):
cm(3) + m(12) * 3 - km(1)
Here cm(), m() and km() are functions which convert the values to a common unit, e.g. meters. In terms of CoffeeScript the following expression
(cm 3) + 3 * (m 12) - (km 1)
is also valid.

Related

How to translate this code into Python 3?

This code was originally written in Python 2 and I need to translate it to Python 3!
I'm sorry for not sharing enough information earlier.
Also, here's the part where self.D was first assigned:
def __init__(self,instance,transformed,describe1,describe2):
    self.D=[]
    self.instance=instance
    self.transformed=transformed
    self.describe1,self.describe2=describe1,describe2
    self.describe=self.describe1+', '+self.describe2 if self.describe2 else self.describe1
    self.column_num=self.tuple_num=self.view_num=0
    self.names=[]
    self.types=[]
    self.origins=[]
    self.features=[]
    self.views=[]
    self.classify_id=-1
    self.classify_num = 1
    self.classes=[]

def generateViews(self):
    T=map(list,zip(*self.D))
    if self.transformed==0:
        s= int( self.column_num)
        for column_id in range(s):
            f = Features(self.names[column_id],self.types[column_id],self.origins[column_id])
            #calculate min,max for numerical,temporal
            if f.type==Type.numerical or f.type==Type.temporal:
                f.min,f.max=min(T[column_id]),max(T[column_id])
                if f.min==f.max:
                    self.types[column_id]=f.type=Type.none
                    self.features.append(f)
                    continue
            d={}
            #calculate distinct,ratio for categorical,temporal
            if f.type == Type.categorical or f.type == Type.temporal:
                for i in range(self.tuple_num):
                    print([type(self.D[i]) for i in range(self.tuple_num)])
                    if self.D[i][column_id] in d:
                        d[self.D[i][column_id]]+=1
                    else:
                        d[self.D[i][column_id]]=1
                f.distinct = len(d)
                f.ratio = 1.0 * f.distinct / self.tuple_num
                f.distinct_values=[(k,d[k]) for k in sorted(d)]
                if f.type==Type.temporal:
                    self.getIntervalBins(f)
            self.features.append(f)
The error I get is:
TypeError: 'map' object is not subscriptable
The snippet you have given is not enough to solve the problem. The problem lies in self.D, which you are trying to subscript with self.D[i]. Please look at the place in your code where self.D is instantiated and make sure that it's an array-like object (e.g. a list) so that you can subscript it.
Edit
Based on your edit, please confirm whether self.D[i] is also array-like for all i in the range mentioned in the code. You can do that by simply running
print([type(self.D[i]) for i in range(self.tuple_num)])
Share the output of this code, so that I may help further.
Edit-2
As per your comments and the edited code snippet, it seems that self.D is the output of some map call. In Python 2, map is a function that returns a list. However, in Python 3, map returns a map object, which is a lazy iterator and cannot be subscripted.
The simplest way to resolve this is to find out the line where self.D was first assigned and, whatever code is on the RHS, wrap it with a list(...) call.
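For illustration, here is a minimal, self-contained Python 3 sketch of that fix; the sample data is hypothetical and only stands in for whatever the real assignment produces:
rows = [("1", "a"), ("2", "b"), ("3", "c")]   # hypothetical stand-in data

# Python 3: map(...) returns a lazy map object, which cannot be indexed.
D = map(list, rows)
# D[0] would raise: TypeError: 'map' object is not subscriptable

# The fix: wrap the same right-hand side in list(...), as described above.
D = list(map(list, rows))
print(D[0])  # ['1', 'a'] -- indexing works again, as it did in Python 2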
Alternatively, just before this line
T=map(list,zip(*self.D))
add the following:
self.D = list(self.D)
Hope this will resolve the issue.
We don't have quite enough information to answer the question, but in Python 3, generator and map objects are not subscriptable. I think the problem may be with your
self.D[i]
expression: you claim that self.D is a list, but it is possible that self.D[i] is a map object.
In your case, to access the indexes, you can convert it to a list:
list(self.D)[i]
Or use unpacking to implicitly convert to a list (this may be more condensed, but remember that explicit is better than implicit):
[*self.D[i]]
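A short, runnable demonstration of the two idioms above, using a plain map object as a hypothetical stand-in for self.D[i]:
row = map(int, ["1", "2", "3"])   # hypothetical stand-in for self.D[i]

# row[0] would raise: TypeError: 'map' object is not subscriptable
print(list(row)[0])               # 1 -- convert to a list first, then index

# Note: a map object is an iterator and can only be consumed once,
# so it is recreated here before trying the unpacking variant.
row = map(int, ["1", "2", "3"])
print([*row][0])                  # 1 -- unpacking into a list works the same way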

Why are macros based on abstract syntax trees better than macros based on string preprocessing?

I am beginning my journey of learning Rust. I came across this line in Rust by Example:
However, unlike macros in C and other languages, Rust macros are expanded into abstract syntax trees, rather than string preprocessing, so you don't get unexpected precedence bugs.
Why is an abstract syntax tree better than string preprocessing?
If you have this in C:
#define X(A,B) A+B
int r = X(1,2) * 3;
The value of r will be 7, because the preprocessor expands it to 1+2 * 3, which is 1+(2*3).
In Rust, you would have:
macro_rules! X { ($a:expr,$b:expr) => { $a+$b } }
let r = X!(1,2) * 3;
This will evaluate to 9, because the compiler will interpret the expansion as (1+2)*3. This is because the compiler knows that the result of the macro is supposed to be a complete, self-contained expression.
That said, the C macro could also be defined like so:
#define X(A,B) ((A)+(B))
This would avoid any non-obvious evaluation problems, including the arguments themselves being reinterpreted due to context. However, when you're using a macro, you can never be sure whether or not the macro has correctly accounted for every possible way it could be used, so it's hard to tell what any given macro expansion will do.
By using AST nodes instead of text, Rust ensures this ambiguity can't happen.
A classic example using the C preprocessor is
#define MUL(a, b) a * b
// ...
int res = MUL(x + y, 5);
The use of the macro will expand to
int res = x + y * 5;
which is very far from the expected
int res = (x + y) * 5;
This happens because the C preprocessor really just does simple text-based substitution; it's not an integral part of the language itself. Preprocessing and parsing are two separate steps.
If the preprocessor instead parsed the macro like the rest of the compiler, which happens for languages where macros are part of the actual language syntax, this is no longer a problem as things like precedence (as mentioned) and associativity are taken into account.
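Python has no macro system, but the same precedence pitfall can be sketched by contrasting plain text substitution with substitution on parsed ast nodes. This is only an analogy to the C/Rust behaviour described above (the ARG placeholder and the SpliceArg helper are made up for the sketch), not how either compiler actually expands macros:
import ast

x, y = 2, 3

# "String macro": splice the argument in as text; grouping is lost.
arg_text = "x + y"
expansion = arg_text + " * 5"              # "x + y * 5"
print(eval(expansion))                     # 17, i.e. x + (y * 5)

# "AST macro": splice the argument in as a parsed expression node instead.
template = ast.parse("ARG * 5", mode="eval")
arg_node = ast.parse(arg_text, mode="eval").body

class SpliceArg(ast.NodeTransformer):
    def visit_Name(self, node):
        # Replace the ARG placeholder with the whole parsed sub-expression.
        return arg_node if node.id == "ARG" else node

expanded = ast.fix_missing_locations(SpliceArg().visit(template))
print(eval(compile(expanded, "<ast>", "eval")))   # 25, i.e. (x + y) * 5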

Are Scala closures as flexible as C++ lambdas?

I know the question seems a bit heretical. Indeed, having much appreciated lambdas in C++11, I was quite thrilled to learn a language which was built to support them from the beginning rather than as a contrived addition.
However, I cannot figure out how to do with Scala all I can do with C++11 lambdas.
Suppose I want to make a function which adds, to a number passed as a parameter, some value contained in a variable a. In C++, I can do both
int a = 5;
auto lambdaVal = [ a](int par) { return par + a; };
auto lambdaRef = [&a](int par) { return par + a; };
Now, if I change a, the second version will change its behavior; but the first will keep adding 5.
In Scala, if I do this
var a = 5
val lambdaOnly = (par:Int) => par + a
I essentially get the lambdaRef model: changing a will immediately change what the function does.
(which seems somewhat specious to me given that this time a isn't even mentioned in the declaration of the lambda, only in its code. But let it be)
Am I missing the way to obtain lambdaVal? Or do I have to first copy a to a val to be free to modify it afterwards without side effects?
The a in the function definition refers to the variable a. If you want to use the value a has at the moment the lambda is created, you have to copy that value, like this:
val lambdaConst = {
  val aNow = a
  (par:Int) => par + aNow
}
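For comparison only, and not part of the Scala answer above: Python closures behave the same way, capturing the variable a rather than its value, and the usual Python trick for capture-by-value is a default argument. A minimal sketch (names are made up):
a = 5

lambda_ref = lambda par: par + a       # captures the variable a, like lambdaOnly
lambda_val = lambda par, a=a: par + a  # default argument freezes a's current value

a = 100
print(lambda_ref(1))  # 101 -- follows the new value of a
print(lambda_val(1))  # 6   -- still uses the value a had when it was defined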

Coding style - ordering of expressions in blocks in Scala

Since I began programming in Scala, I gravitated towards what seems to be a natural coding style in this language, which is easiest to explain with a simple example:
val a = {
  def f1(p : Int) = ...
  def f2(p : Int) = ...
  f1(12) * f2(100)
}
As you can see, the multiplication of the values, which is the first thing you need to look at if you want to understand the code, does not appear until the last line. Instead, you need to read through the pieces of the puzzle first (functions f1, f2) before you can see how they're actually arranged. For me, this makes the code harder to read. How are you dealing with this problem, or do you not find it a problem at all?
One interesting approach might be to use the untyped macro proposal in macro-paradise to introduce a where binding, such that:
val a = (f1(12) * f2(100)) where {
  def f1(x : Int) = x + 1
  def f2(x : Int) = x + 2
}
gets rewritten to your code above. As I understand it, untyped macros would allow the non-existent identifiers f1 and f2 to exist past the pre-macro typecheck. I think the rewrite should be relatively simple, and then the second typecheck would catch any problems. However, I've never actually written any macros, so it's possible there's something about this which would fail!
If it were possible, I think it would be quite a nice form to have (and rewriting would solve problems with execution order) - if I get some time I may have a stab at writing it!
Edit: I've had a go at writing this, and bits of it turn out surprisingly easy. Code is available on github. Unfortunately, the best I can do so far is:
val result = where ( f1(1) * f2(2), {
  def f1(x : Int) = x + 1
  def f2(x : Int) = x + 2
})
The problem is that Scala's infix operators are just method calls, and so I'd need to have something constructed on the expression (f1(1) * f2(2)) in order to invoke them. But that's the very expression which won't type properly before macro resolution, so I'm not quite sure what to do. Time for a new question, methinks!

Performance difference between functions and pattern matching in Mathematica

So Mathematica is different from other dialects of Lisp because it blurs the lines between functions and macros. In Mathematica, if a user wanted to write a mathematical function they would likely use pattern matching like f[x_]:= x*x instead of f=Function[{x},x*x], though both would return the same result when called with f[x]. My understanding is that the first approach is something equivalent to a Lisp macro and, in my experience, is favored because of the more concise syntax.
So I have two questions: is there a performance difference between executing functions versus the pattern matching/macro approach? Though part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.
The reason I care about this question is because of the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where the error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.
The way I understand Mathematica is that it is one giant search-and-replace engine. All functions, variables, and other assignments are essentially stored as rules, and during evaluation Mathematica goes through this global rule base and applies them until the resulting expression stops changing.
It follows that the fewer times you have to go through the list of rules, the faster the evaluation. Looking at what happens using Trace (using gdelfino's functions g and h):
In[1]:= Trace@(#*#)&@x
Out[1]= {x x, x^2}
In[2]:= Trace@g@x
Out[2]= {g[x], x x, x^2}
In[3]:= Trace@h@x
Out[3]= {{h, Function[{x}, x x]}, Function[{x}, x x][x], x x, x^2}
it becomes clear why anonymous functions are fastest and why using Function introduces additional overhead over a simple SetDelayed. I recommend looking at the introduction of Leonid Shifrin's excellent book, where these concepts are explained in some detail.
I have on occasion constructed a Dispatch table of all the functions I need and manually applied it to my starting expression. This provides a significant speed increase over normal evaluation as none of Mathematica's inbuilt functions need to be matched against my expression.
My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
Not really. Mathematica is a term rewriter, as are Lisp macros.
So I have two questions, is there a performance difference between executing functions versus the pattern matching/macro approach?
Yes. Note that you are never really "executing functions" in Mathematica. You are just applying rewrite rules to change one expression into another.
Consider mapping the Sqrt function over a packed array of floating point numbers. The fastest solution in Mathematica is to apply the Sqrt function directly to the packed array because it happens to implement exactly what we want and is optimized for this special case:
In[1] := xs = N@Range[100000];
In[2] := Sqrt[xs]; // AbsoluteTiming
Out[2] = {0.0060000, Null}
We might define a global rewrite rule that has terms of the form sqrt[x] rewritten to Sqrt[x] such that the square root will be calculated:
In[3] := Clear[sqrt];
sqrt[x_] := Sqrt[x];
Map[sqrt, xs]; // AbsoluteTiming
Out[3] = {0.4800007, Null}
Note that this is ~100× slower than the previous solution.
Alternatively, we might define a global rewrite rule that replaces the symbol sqrt with a lambda function that invokes Sqrt:
In[4] := Clear[sqrt];
sqrt = Function[{x}, Sqrt[x]];
Map[sqrt, xs]; // AbsoluteTiming
Out[4] = {0.0500000, Null}
Note that this is ~10× faster than the previous solution.
Why? Because the slow second solution is looking up the rewrite rule sqrt[x_] :> Sqrt[x] in the inner loop (for each element of the array) whereas the fast third solution looks up the value Function[...] of the symbol sqrt once and then applies that lambda function repeatedly. In contrast, the fastest first solution is a loop calling sqrt written in C. So searching the global rewrite rules is extremely expensive and term rewriting is expensive.
If so, why is Sqrt ever fast? You might expect a 2× slowdown instead of 10× because we've replaced one lookup for Sqrt with two lookups for sqrt and Sqrt in the inner loop but this is not so because Sqrt has the special status of being a built-in function that will be matched in the core of the Mathematica term rewriter itself rather than via the general-purpose global rewrite table.
Other people have described much smaller performance differences between similar functions. I believe the performance differences in those cases are just minor differences in the exact implementation of Mathematica's internals. The biggest issue with Mathematica is the global rewrite table. In particular, this is where Mathematica diverges from traditional term-level interpreters.
You can learn a lot about Mathematica's performance by writing mini Mathematica implementations. In this case, the above solutions might be compiled to (for example) F#. The array may be created like this:
> let xs = [|1.0..100000.0|];;
...
The built-in sqrt function can be converted into a closure and given to the map function like this:
> Array.map sqrt xs;;
Real: 00:00:00.006, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
...
This takes 6ms just like Sqrt[xs] in Mathematica. But that is to be expected because this code has been JIT compiled down to machine code by .NET for fast evaluation.
Looking up rewrite rules in Mathematica's global rewrite table is similar to looking up the closure in a dictionary keyed on its function name. Such a dictionary can be constructed like this in F#:
> open System.Collections.Generic;;
> let fns = Dictionary<string, (obj -> obj)>(dict["sqrt", unbox >> sqrt >> box]);;
This is similar to the DownValues data structure in Mathematica, except that we aren't searching multiple resulting rules for the first to match on the function arguments.
The program then becomes:
> Array.map (fun x -> fns.["sqrt"] (box x)) xs;;
Real: 00:00:00.044, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
...
Note that we get a similar 10× performance degradation due to the hash table lookup in the inner loop.
An alternative would be to store the DownValues associated with a symbol in the symbol itself in order to avoid the hash table lookup.
We can even write a complete term rewriter in just a few lines of code. Terms may be expressed as values of the following type:
> type expr =
    | Float of float
    | Symbol of string
    | Packed of float []
    | Apply of expr * expr [];;
Note that Packed implements Mathematica's packed lists, i.e. unboxed arrays.
The following init function constructs a List with n elements using the function f, returning a Packed if every return value was a Float or a more general Apply(Symbol "List", ...) otherwise:
> let init n f =
    let rec packed ys i =
      if i=n then Packed ys else
        match f i with
        | Float y ->
            ys.[i] <- y
            packed ys (i+1)
        | y ->
            Apply(Symbol "List", Array.init n (fun j ->
              if j<i then Float ys.[j]
              elif j=i then y
              else f j))
    packed (Array.zeroCreate n) 0;;
val init : int -> (int -> expr) -> expr
The following rule function uses pattern matching to identify expressions that it can understand and replaces them with other expressions:
> let rec rule = function
    | Apply(Symbol "Sqrt", [|Float x|]) ->
        Float(sqrt x)
    | Apply(Symbol "Map", [|f; Packed xs|]) ->
        init xs.Length (fun i -> rule(Apply(f, [|Float xs.[i]|])))
    | f -> f;;
val rule : expr -> expr
Note that the type of this function expr -> expr is characteristic of term rewriting: rewriting replaces expressions with other expressions rather than reducing them to values.
Our program can now be defined and executed by our custom term rewriter:
> rule (Apply(Symbol "Map", [|Symbol "Sqrt"; Packed xs|]));;
Real: 00:00:00.049, CPU: 00:00:00.046, GC gen0: 24, gen1: 0, gen2: 0
We've recovered the performance of Map[Sqrt, xs] in Mathematica!
We can even recover the performance of Sqrt[xs] by adding an appropriate rule:
| Apply(Symbol "Sqrt", [|Packed xs|]) ->
Packed(Array.map sqrt xs)
I wrote an article on term rewriting in F#.
Some measurements
Based on @gdelfino's answer and comments by @rcollyer I made this small program:
j = # # + # # &;
g[x_] := x x + x x ;
h = Function[{x}, x x + x x ];
anon = Table[Timing[Do[ # # + # # &[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
jj = Table[Timing[Do[ j[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
gg = Table[Timing[Do[ g[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
hh = Table[Timing[Do[ h[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
ListLinePlot[ {anon, jj, gg, hh},
PlotStyle -> {Black, Red, Green, Blue},
PlotRange -> All]
The results are, at least for me, very surprising:
Any explanations? Please feel free to edit this answer (comments are a mess for long text)
Edit
Tested with the identity function f[x] = x to isolate the parsing from the actual evaluation. Results (same colors):
Note: results are very similar to this Plot for constant functions (f[x]:=1);
Pattern matching seems faster:
In[1]:= g[x_] := x*x
In[2]:= h = Function[{x}, x*x];
In[3]:= Do[h[RandomInteger[100]], {1000000}] // Timing
Out[3]= {1.53927, Null}
In[4]:= Do[g[RandomInteger[100]], {1000000}] // Timing
Out[4]= {1.15919, Null}
Pattern matching is also more flexible as it allows you to overload a definition:
In[5]:= g[x_] := x * x
In[6]:= g[x_,y_] := x * y
For simple functions you can compile to get the best performance:
In[7]:= k[x_] = Compile[{x}, x*x]
In[8]:= Do[k[RandomInteger[100]], {100000}] // Timing
Out[8]= {0.083517, Null}
You can use the function recordSteps in the previous answer to see what Mathematica actually does with Function. It treats it just like any other head. For example, suppose you have the following:
f = Function[{x}, x + 2];
f[2]
It first transforms f[2] into
Function[{x}, x + 2][2]
At the next step, x+2 is transformed into 2+2. Essentially, "Function" evaluation behaves like an application of pattern matching rules, so it shouldn't be surprising that it's not faster.
You can think of everything in Mathematica as an expression, where evaluation is the process of rewriting parts of the expression in a predefined sequence; this applies to Function just as to any other head.