Destructuring assignment and default values - CoffeeScript

CoffeeScript supports default arguments for functions and destructuring assignment. Is there any way to combine those features?
Destructuring assignment for arguments:
({name, age}) ->
  "#{name} is #{age} years old?"
Default argument value:
(name, age = 18) ->
  "#{name} is #{age} years old?"
Something like:
({name, age = 18}) -> # syntax error
  "#{name} is #{age} years old?"

That is not supported; there is an open issue for it here: https://github.com/jashkenas/coffeescript/issues/1558. ES6 will support destructuring with defaults everywhere, but CoffeeScript, alas, does not.
It seems the closest you can get is to initialize on separate lines:
f = ({a, b}) ->
  b ?= 2
  console.log {a: a, b: b}
This tempting (but ugly) syntax was 'discouraged', and does not even work for me currently on recent versions of CoffeeScript, but I suppose YMMV.
f = ({a, b}, b = 2) ->
  console.log {a: a, b: b}

In Kotlin, I can override some existing operators but what about creating new operators?

In Kotlin, I see I can override some operators, such as + by function plus(), and * by function times() ... but for some things like Sets, the preferred (set theory) symbols/operators don't exist. For example A∩B for intersection and A∪B for union.
I can't seem to define my own operators; there is no clear syntax to say what symbol to use for an operator. For example, if I want to make a function for $$ as an operator:
operator fun String.$$(other: String) = "$this !!whatever!! $other"
// or even
operator fun String.whatever(other: String) = "$this !!whatever!! $other" // how do I say this is the $$ symbol?!?
I get the same error for both:
Error:(y, x) Kotlin: 'operator' modifier is inapplicable on this function: illegal function name
What are the rules for what operators can be created or overridden?
Note: this question is intentionally written and answered by the author (Self-Answered Questions), so that the idiomatic answers to commonly asked Kotlin topics are present in SO.
Kotlin only allows a very specific set of operators to be overridden and you cannot change the list of available operators.
You should take care when overriding operators to stay in the spirit of the original operator, or of other common uses of the mathematical symbol. But sometimes the typical symbol isn't available. For example, set union ∪ can easily be treated as + because conceptually it makes sense, and Set<T>.plus() is a built-in operator already provided by Kotlin; or you could get creative and use an infix function for this case:
// already provided by Kotlin:
// operator fun <T> Set<T>.plus(elements: Iterable<T>): Set<T>
// and now add my new one, lower case 'u' is pretty similar to math symbol ∪
infix fun <T> Set<T>.u(elements: Set<T>): Set<T> = this.plus(elements)
// and therefore use any of...
val union1 = setOf(1,2,5) u setOf(3,6)
val union2 = setOf(1,2,5) + setOf(3,6)
val union3 = setOf(1,2,5).plus(setOf(3,6))
Or maybe it is more clear as:
infix fun <T> Set<T>.union(elements: Set<T>): Set<T> = this.plus(elements)
// and therefore
val union4 = setOf(1,2,5) union setOf(3,6)
And continuing with your list of Set operators, intersection uses the symbol ∩, so assuming every programmer has a font where the letter 'n' looks like ∩, we could get away with:
infix fun <T> Set<T>.n(elements: Set<T>): Set<T> = this.intersect(elements)
// and therefore...
val intersect = setOf(1,3,5) n setOf(3,5)
or via operator overloading of * as:
operator fun <T> Set<T>.times(elements: Set<T>): Set<T> = this.intersect(elements)
// and therefore...
val intersect = setOf(1,3,5) * setOf(3,5)
Although you can already use the existing standard library infix function intersect() as:
val intersect = setOf(1,3,5) intersect setOf(3,5)
In cases where you are inventing something new, you need to pick the closest operator or function name. For example, to negate a Set of enums you might use the - operator (unaryMinus()) or the ! operator (not()):
enum class Things {
    ONE, TWO, THREE, FOUR, FIVE
}
operator fun Set<Things>.unaryMinus() = Things.values().toSet().minus(this)
operator fun Set<Things>.not() = Things.values().toSet().minus(this)
// and therefore use any of...
val current = setOf(Things.THREE, Things.FIVE)
println(-current) // [ONE, TWO, FOUR]
println(-(-current)) // [THREE, FIVE]
println(!current) // [ONE, TWO, FOUR]
println(!!current) // [THREE, FIVE]
println(current.not()) // [ONE, TWO, FOUR]
println(current.not().not()) // [THREE, FIVE]
Be thoughtful: operator overloading can be very helpful, or it can lead to confusion and chaos. You have to decide what is best while maintaining code readability. Sometimes an operator is best, if it fits the norm for that symbol; sometimes an infix function whose name resembles the original symbol; and sometimes a descriptive word, so that there is no chance of confusion.
Always check the Kotlin Stdlib API Reference because many operators you want might already be defined, or have equivalent extension functions.
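For instance, here is a quick sketch of a few more set operations the standard library already covers, so no new operators are needed for them (the example values are arbitrary):
val a = setOf(1, 2, 5)
val b = setOf(2, 3)
val difference = a - b                        // Set<T>.minus(), set difference -> [1, 5]
val withExtra = a + 7                         // Set<T>.plus(element) -> [1, 2, 5, 7]
val symmetricDiff = (a + b) - (a intersect b) // -> [1, 5, 3]
val isMember = 5 in a                         // 'in' maps to contains() -> true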
One other thing...
And about your $$ operator, technically you can do that as:
infix fun String.`$$`(other: String) = "$this !!whatever!! $other"
But because you need to escape the name of the function, it will be ugly to call:
val text = "you should do" `$$` "you want"
That isn't truly operator overloading and would only work if it is a function that can be made infix.

How to shuffle between Play! templates in Scala

I have a wrapper template that looks like this:
@(first: Html, second: Html, third: Html)
<div class="wrapper">
    @first
    @second
    @third
</div>
I have three templates I want to shuffle and place as first, second and third.
Let's name them: views.html.a, views.html.b, views.html.c.
The controller code:
val a = views.html.a
val b = views.html.b
val c = views.html.c
val list = Random.shuffle(List(a, b, c)) // Will use Random.shuffle here, but it fails compilation either way
Ok(views.html.wrapper(list(0)(), list(1)(), list(2)()))
The compilation error:
play.templates.BaseScalaTemplate[play.api.templates.HtmlFormat.Appendable,play.templates.Format[play.api.templates.HtmlFormat.Appendable]] does not take parameters
It appears that putting the objects into the List and getting them out again tricks the compiler.
If I don't use list and do:
Ok(views.html.wrapper(a(), b(), c()))
it works as expected and renders the page.
I know I can move the random logic to the wrapper template but I prefer to understand / fix the current implementation and learn some Scala in the process.
Thanks
EDIT
After reading serejja's answer, I'll add complexity to the question since this better represents the problem I'm facing.
The three templates need to take a boolean, so views.html.a looks like:
@(checkMe: Boolean)
<div ...
So I can't use the parentheses before the shuffle. Only after the shuffle occurs do I wish to pass true, false, true as the booleans.
Using this approach:
Ok(views.html.wrapper(list(0)(true), list(1)(false), list(2)(true)))
gives the following compilation error:
play.templates.BaseScalaTemplate[play.api.templates.Html,play.templates.Format[play.api.templates.Html]] with play.api.templates.Template1[Boolean,play.api.templates.Html] does not take parameters
You were almost there :)
val a = views.html.a()
val b = views.html.b()
val c = views.html.c()
Notice the parentheses. The type of a, b and c is now play.api.templates.HtmlFormat.Appendable instead of the template object type it was before.
Now you can pass it as you wanted:
Ok(views.html.wrapper(list(0), list(1), list(2)))
EDIT:
Ok, I cannot imagine what you are up to (so that the solution could be simplified if possible) but I found a workaround.
First, consider that views a, b and c are all on one level of the hierarchy:
                  / a
BaseScalaTemplate - b
                  \ c
For this solution to work, these views must have the same number of parameters (a(check: Boolean), b(check: Boolean), c(check: Boolean)) so that they make a List[play.templates.BaseScalaTemplate[play.api.templates.Html,play.templates.Format[play.api.templates.Html]]
with play.api.templates.Template1[Boolean,play.api.templates.Html]] (which means "a generic template with one Boolean parameter").
play.api.templates.Template1 has a method render which takes that parameter and returns you a HtmlFormat.Appendable (which I mentioned earlier).
Considering this, your solution might look like this:
val a = views.html.a
val b = views.html.b
val c = views.html.c
val randomizedViews = Random.shuffle(List(a, b, c))
Ok(views.html.wrapper(randomizedViews(0).render(true), randomizedViews(1).render(false), randomizedViews(2).render(true)))
Note that this solution is far from perfect and I'd suggest you not use it in real life. I don't think views are intended to be used this way.
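Another workaround worth sketching (not from the original answers, and the exact Html type and package vary between Play versions): give the list an explicit element type of Boolean => Html by eta-expanding each template's apply method, so the compiler keeps an applicable function type after the shuffle.
import scala.util.Random
import play.api.templates.Html // play.twirl.api.Html in newer Play versions

// Each generated template object exposes apply(checkMe: Boolean): Html,
// so turn the templates into plain functions before shuffling.
val templates: List[Boolean => Html] =
  List(views.html.a.apply _, views.html.b.apply _, views.html.c.apply _)
val shuffled = Random.shuffle(templates)
Ok(views.html.wrapper(shuffled(0)(true), shuffled(1)(false), shuffled(2)(true)))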

New Scope in Coq

I'd like my own scope, to play around with long distfixes.
Declare Scope my_scope.
Delimit Scope my_scope with my.
Open Scope my_scope.
Definition f (x y a b : nat) : nat := x+y+a+b.
Notation "x < y * a = b" := (f x y a b)
(at level 100, no associativity) : my_scope.
Check (1 < 2 * 3 = 4)%my.
How do you make a new scope?
EDIT: I chose "x < y * a = b" to override Coq's operators (each with a different precedence).
The command Declare Scope does not exist. The various commands about scopes are described in section 12.2 of the Coq manual.
Your choice of an example notation has inherent problems, because it clashes with pre-defined notations, which seem to be used before your notation.
When looking at the first components, the parser sees _ < _ and thinks that you are actually talking about comparison of integers; then it sees the second part as being an instance of the notation _ * _; then it sees that all of that is the left-hand side of an equality. And all along the parser is happy: it constructs an expression of the form:
(1 < (2 * 3)) = 4
This is constructed by the parser, and the type system has not been solicited yet. The type checker sees a natural number as the first child of (_ < _) and is happy. It sees (_ * _) as the second child and is happy; it now knows that the first child of that product should be a nat and it is still happy. In the end it has an equality whose first component is in type Prop but whose second component is in type nat, and that is where type checking rejects the expression.
If you type Locate "_ < _ * _ = _". the answer tells you that you did define a new notation. The problem is that this notation never gets used, because the parser always finds another notation it can use before. Understanding why a notation is preferred to another one requires more knowledge of parsing technology, as alluded to in Coq's manual, chapter 12, in the sentence (obscure to me):
Coq extensible parsing is performed by Camlp5 which is essentially a LL1 parser.
You have to choose the levels of the various variables, x, y, a, and b so that none of these variables will be able to match too much of the text. For instance, I tried defining a notation close to yours, but with a starting and an ending bracket (and I guess this simplifies the task greatly).
Notation "<< x < y * a = b >>" := (f x y a b)
(x at level 59, y at level 39, a at level 59) : my_scope.
The level of x is chosen to be lower than the level of =, the level of y lower than the level of *, and the level of a lower than the level of =. The levels were obtained by reading the output of the command Print Grammar constr. It seems to work, as the following command is accepted.
Check << 1 < 2 * 3 = 4 >>.
But you may need to include a little more engineering to have a really good notation.
To answer the actual question in your title:
The new scope gets created when you declare a notation that uses it. That is, you don’t declare a new scope my_scope separately. You just write
Notation "x <<< y" := (f x y) (at level 100, no associativity) : my_scope.
and that declares a new scope my_scope.
The answers for this question only apply to older versions of Coq. I'm not sure when it started, but at least as of Coq 8.13.2, Coq prefers the user to first use Declare Scope to create a new scope. What the OP has in their code is now Coq's preferred way to declare scopes.
See the current manual

Performance difference between functions and pattern matching in Mathematica

So Mathematica is different from other dialects of Lisp because it blurs the line between functions and macros. In Mathematica, if a user wanted to write a mathematical function they would likely use pattern matching, like f[x_] := x*x, instead of f = Function[{x}, x*x], though both return the same result when called with f[x]. My understanding is that the first approach is something equivalent to a Lisp macro, and in my experience it is favored because of the more concise syntax.
So I have two questions: is there a performance difference between executing functions versus the pattern-matching/macro approach? Part of me wouldn't be surprised if functions were actually transformed into some version of macros to allow features like Listable to be implemented.
The reason I care about this question is because of the recent set of questions (1) (2) about trying to catch Mathematica errors in large programs. If most of the computations were defined in terms of Functions, it seems to me that keeping track of the order of evaluation and where the error originated would be easier than trying to catch the error after the input has been rewritten by the successive application of macros/patterns.
The way I understand Mathematica is that it is one giant search-and-replace engine. All functions, variables, and other assignments are essentially stored as rules, and during evaluation Mathematica goes through this global rule base and applies them until the resulting expression stops changing.
It follows that the fewer times you have to go through the list of rules, the faster the evaluation. Looking at what happens using Trace (using gdelfino's functions g and h):
In[1]:= Trace@(#*#)&@x
Out[1]= {x x,x^2}
In[2]:= Trace@g@x
Out[2]= {g[x],x x,x^2}
In[3]:= Trace@h@x
Out[3]= {{h,Function[{x},x x]},Function[{x},x x][x],x x,x^2}
it becomes clear why anonymous functions are fastest and why using Function introduces additional overhead over a simple SetDelayed. I recommend looking at the introduction of Leonid Shifrin's excellent book, where these concepts are explained in some detail.
I have on occasion constructed a Dispatch table of all the functions I need and manually applied it to my starting expression. This provides a significant speed increase over normal evaluation as none of Mathematica's inbuilt functions need to be matched against my expression.
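A rough sketch of that approach (the rules and the expression below are invented for illustration, not taken from the post above):
(* collect the definitions you need as explicit rules in a dispatch table *)
rules = Dispatch[{sq[x_] :> x^2, plus[x_, y_] :> x + y}];
(* then apply them repeatedly to the starting expression yourself, bypassing the global rule base *)
expr = plus[sq[2], sq[3]];
expr //. rules  (* 13 *)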
My understanding is that the first approach is something equivalent to a lisp macro and in my experience is favored because of the more concise syntax.
Not really. Mathematica is a term rewriter, as are Lisp macros.
So I have two questions, is there a performance difference between executing functions versus the pattern matching/macro approach?
Yes. Note that you are never really "executing functions" in Mathematica. You are just applying rewrite rules to change one expression into another.
Consider mapping the Sqrt function over a packed array of floating point numbers. The fastest solution in Mathematica is to apply the Sqrt function directly to the packed array because it happens to implement exactly what we want and is optimized for this special case:
In[1] := xs = N@Range[100000];
In[2] := Sqrt[xs]; // AbsoluteTiming
Out[2] = {0.0060000, Null}
We might define a global rewrite rule that has terms of the form sqrt[x] rewritten to Sqrt[x] such that the square root will be calculated:
In[3] := Clear[sqrt];
sqrt[x_] := Sqrt[x];
Map[sqrt, xs]; // AbsoluteTiming
Out[3] = {0.4800007, Null}
Note that this is ~100× slower than the previous solution.
Alternatively, we might define a global rewrite rule that replaces the symbol sqrt with a lambda function that invokes Sqrt:
In[4] := Clear[sqrt];
sqrt = Function[{x}, Sqrt[x]];
Map[sqrt, xs]; // AbsoluteTiming
Out[4] = {0.0500000, Null}
Note that this is ~10× faster than the previous solution.
Why? Because the slow second solution is looking up the rewrite rule sqrt[x_] :> Sqrt[x] in the inner loop (for each element of the array) whereas the fast third solution looks up the value Function[...] of the symbol sqrt once and then applies that lambda function repeatedly. In contrast, the fastest first solution is a loop calling sqrt written in C. So searching the global rewrite rules is extremely expensive and term rewriting is expensive.
If so, why is Sqrt ever fast? You might expect a 2× slowdown instead of 10× because we've replaced one lookup for Sqrt with two lookups for sqrt and Sqrt in the inner loop but this is not so because Sqrt has the special status of being a built-in function that will be matched in the core of the Mathematica term rewriter itself rather than via the general-purpose global rewrite table.
Other people have described much smaller performance differences between similar functions. I believe the performance differences in those cases are just minor differences in the exact implementation of Mathematica's internals. The biggest issue with Mathematica is the global rewrite table. In particular, this is where Mathematica diverges from traditional term-level interpreters.
You can learn a lot about Mathematica's performance by writing mini Mathematica implementations. In this case, the above solutions might be compiled to (for example) F#. The array may be created like this:
> let xs = [|1.0..100000.0|];;
...
The built-in sqrt function can be converted into a closure and given to the map function like this:
> Array.map sqrt xs;;
Real: 00:00:00.006, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
...
This takes 6ms just like Sqrt[xs] in Mathematica. But that is to be expected because this code has been JIT compiled down to machine code by .NET for fast evaluation.
Looking up rewrite rules in Mathematica's global rewrite table is similar to looking up the closure in a dictionary keyed on its function name. Such a dictionary can be constructed like this in F#:
> open System.Collections.Generic;;
> let fns = Dictionary<string, (obj -> obj)>(dict["sqrt", unbox >> sqrt >> box]);;
This is similar to the DownValues data structure in Mathematica, except that we aren't searching multiple resulting rules for the first to match on the function arguments.
The program then becomes:
> Array.map (fun x -> fns.["sqrt"] (box x)) xs;;
Real: 00:00:00.044, CPU: 00:00:00.031, GC gen0: 0, gen1: 0, gen2: 0
...
Note that we get a similar 10× performance degradation due to the hash table lookup in the inner loop.
An alternative would be to store the DownValues associated with a symbol in the symbol itself in order to avoid the hash table lookup.
We can even write a complete term rewriter in just a few lines of code. Terms may be expressed as values of the following type:
> type expr =
| Float of float
| Symbol of string
| Packed of float []
| Apply of expr * expr [];;
Note that Packed implements Mathematica's packed lists, i.e. unboxed arrays.
The following init function constructs a List with n elements using the function f, returning a Packed if every return value was a Float or a more general Apply(Symbol "List", ...) otherwise:
> let init n f =
    let rec packed ys i =
      if i=n then Packed ys else
        match f i with
        | Float y ->
            ys.[i] <- y
            packed ys (i+1)
        | y ->
            Apply(Symbol "List", Array.init n (fun j ->
              if j<i then Float ys.[j]
              elif j=i then y
              else f j))
    packed (Array.zeroCreate n) 0;;
val init : int -> (int -> expr) -> expr
The following rule function uses pattern matching to identify expressions that it can understand and replaces them with other expressions:
> let rec rule = function
    | Apply(Symbol "Sqrt", [|Float x|]) ->
        Float(sqrt x)
    | Apply(Symbol "Map", [|f; Packed xs|]) ->
        init xs.Length (fun i -> rule(Apply(f, [|Float xs.[i]|])))
    | f -> f;;
val rule : expr -> expr
Note that the type of this function expr -> expr is characteristic of term rewriting: rewriting replaces expressions with other expressions rather than reducing them to values.
Our program can now be defined and executed by our custom term rewriter:
> rule (Apply(Symbol "Map", [|Symbol "Sqrt"; Packed xs|]));;
Real: 00:00:00.049, CPU: 00:00:00.046, GC gen0: 24, gen1: 0, gen2: 0
We've recovered the performance of Map[Sqrt, xs] in Mathematica!
We can even recover the performance of Sqrt[xs] by adding an appropriate rule:
| Apply(Symbol "Sqrt", [|Packed xs|]) ->
Packed(Array.map sqrt xs)
I wrote an article on term rewriting in F#.
Some measurements
Based on @gdelfino's answer and comments by @rcollyer I made this small program:
j = # # + # # &;
g[x_] := x x + x x ;
h = Function[{x}, x x + x x ];
anon = Table[Timing[Do[ # # + # # &[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
jj = Table[Timing[Do[ j[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
gg = Table[Timing[Do[ g[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
hh = Table[Timing[Do[ h[i], {i, k}]][[1]], {k, 10^5, 10^6, 10^5}];
ListLinePlot[ {anon, jj, gg, hh},
PlotStyle -> {Black, Red, Green, Blue},
PlotRange -> All]
The results are, at least for me, very surprising:
Any explanations? Please feel free to edit this answer (comments are a mess for long text)
Edit
Tested with the identity function f[x] = x to isolate the parsing from the actual evaluation. Results (same colors):
Note: results are very similar to this Plot for constant functions (f[x]:=1);
Pattern matching seems faster:
In[1]:= g[x_] := x*x
In[2]:= h = Function[{x}, x*x];
In[3]:= Do[h[RandomInteger[100]], {1000000}] // Timing
Out[3]= {1.53927, Null}
In[4]:= Do[g[RandomInteger[100]], {1000000}] // Timing
Out[4]= {1.15919, Null}
Pattern matching is also more flexible as it allows you to overload a definition:
In[5]:= g[x_] := x * x
In[6]:= g[x_,y_] := x * y
For simple functions you can compile to get the best performance:
In[7]:= k = Compile[{x}, x*x];
In[8]:= Do[k[RandomInteger[100]], {100000}] // Timing
Out[8]= {0.083517, Null}
You can use the function recordSteps from a previous answer to see what Mathematica actually does with Function. It treats it just like any other head. For example, suppose you have the following:
f = Function[{x}, x + 2];
f[2]
It first transforms f[2] into
Function[{x}, x + 2][2]
At the next step, x+2 is transformed into 2+2. Essentially, "Function" evaluation behaves like an application of pattern matching rules, so it shouldn't be surprising that it's not faster.
You can think of everything in Mathematica as an expression, where evaluation is the process of rewriting parts of the expression in a predefined sequence; this applies to Function just as to any other head.

Scala closures on wikipedia

I found the following snippet on the Closure page on Wikipedia:
// Return a list of all books with at least 'threshold' copies sold.
def bestSellingBooks(threshold: Int) = bookList.filter(book => book.sales >= threshold)
// or
def bestSellingBooks(threshold: Int) = bookList.filter(_.sales >= threshold)
Correct me if I'm wrong, but this isn't a closure? It is a function literal, an anonymous function, a lambda function, but not a closure?
Well... if you want to be technical, this is a function literal which is translated at runtime into a closure, closing the open terms (binding them to a val/var in the scope of the function literal). Also, in the context of this function literal (_.sales >= threshold), threshold is a free variable, as the function literal itself doesn't give it any meaning. By itself, _.sales >= threshold is an open term. At runtime, it is bound to the local variable of the function each time the function is called.
Take this function for example, generating closures:
def makeIncrementer(inc: Int): (Int => Int) = (x: Int) => x + inc
At runtime, the following code produces 3 closures. It's also interesting to note that b and c are not the same closure (b == c gives false).
val a = makeIncrementer(10)
val b = makeIncrementer(20)
val c = makeIncrementer(20)
I still think the example given on Wikipedia is a good one, albeit not quite covering the whole story. It's quite hard to give an example of actual closures by the strictest definition without an actual memory dump of a running program. It's the same with the class-object relation. You usually give an example of an object by defining a class Foo { ... and then instantiating it with val f = new Foo, saying that f is the object.
-- Flaviu Cipcigan
Notes:
Reference: Programming in Scala, Martin Odersky, Lex Spoon, Bill Venners
Code compiled with Scala version 2.7.5.final running on Java 1.6.0_14.
I'm not entirely sure, but I think you're right. Doesn't a closure require state (I guess free variables...)?
Or maybe the bookList is the free variable?
As far as I understand, this is a closure: it captures the formal parameter threshold and the context variable bookList from the enclosing scope. So the list returned by the function may change while applying the filter predicate, varying with the elements of the bookList variable from the context.
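A small sketch to illustrate that point (the Book class and the sample data here are made up):
case class Book(title: String, sales: Int)
var bookList = List(Book("A", 50), Book("B", 500))

def bestSellingBooks(threshold: Int) = bookList.filter(_.sales >= threshold)

bestSellingBooks(100)                  // List(Book(B,500))
bookList = Book("C", 1000) :: bookList // the enclosing-scope variable changes...
bestSellingBooks(100)                  // List(Book(C,1000), Book(B,500)) ...and so does the result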