Inverse of destructuring assignment: restructuring assignment? - coffeescript

In addition to its handy syntax for destructuring assignment, CoffeeScript supports a similar syntax for constructing object literals:
a = 1
b = 2
o = {a, b}
> {a: 1, b: 2}
I couldn't find mention of this syntax anywhere, so I took to calling it restructuring assignment. Is there a conventional name for this construct? If not, what are others calling it?
Update
structuring expressions is my new name du jour.

This isn't related to assignment; this is just an addition to JavaScript's object literal syntax.
It's interesting that you seem to perceive it as "derived from" destructuring assignment, because in fact the opposite is the case: destructuring assignment comes from object literal notation, and isn't limited to the keyless subset you're describing. For example, {foo: asdf} = bar does exactly what you'd expect:
asdf = bar.foo
So {a: b, b: a} = {a, b} is a very confusing way to write [a, b] = [b, a].
You can also write {@foo} to produce {foo: @foo}, which is another useful shorthand (and of course it works in destructuring assignment statements as well).
If you really need a name for it, "object literal key inference" might be more accurate.

Related

What does Array((1L, 2L)) do?

What does the following code do?
var pair: RDD[(Long,Long)] = sparkContext.parallelize(Array((3L, 4L)))
Does the variable "pair" consist of a single element/pair? The types of 3 and 4 are "long", not "int".
It creates an Array[Tuple2[Long,Long]] with one element (3L, 4L) (aka Tuple2(3L,4L)).
Such a case could also be written, perhaps more readably, as Array(3L -> 4L).
Also note that it's generally better to avoid a mutable type like Array in favor of Seq or List.
Let's decompose it:
Array(el1, el2, el3, ...) creates a new Array containing the given elements. In our case, there is a single element in the array: (3L,4L).
Writing (a,b) creates a Tuple2 value containing a and b of types A and B (not necessarily the same types). The Tuple2[A,B] type can also be written as (A,B) for clarity.
In our case, a=3L and b=4L. You might be confused by the L suffix after literals. It is there so that the compiler can differentiate between 3 as an Int and 3L as a Long. Scala also supports other such suffixes, like f or d for explicitly declaring Float or Double literals.
So you're indeed creating an Array[Tuple2[Long,Long]], also expressed as Array[(Long,Long)], which goes into your RDD.
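To make the equivalences above concrete, here's a small sketch (the names a1 through a3 are just illustrative):

```scala
// The three spellings below all build the same one-element array of a pair.
val a1: Array[(Long, Long)] = Array((3L, 4L))              // tuple literal
val a2: Array[Tuple2[Long, Long]] = Array(Tuple2(3L, 4L))  // explicit Tuple2
val a3 = Array(3L -> 4L)                                   // arrow shorthand for a pair
```

Each array contains exactly one element: the pair (3L, 4L).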

Scala currying example in the tutorial is confusing me

I'm reading a bit about Scala currying here and I don't understand this example very much:
def foldLeft[B](z: B)(op: (B, A) => B): B
What is the [B] in square brackets? Why is it in brackets? The B after the colon is the return type right? What is the type?
It looks like this method has 2 parameter lists: one with a parameter named z and one with a parameter named op which is a function.
op looks like it takes a function (B, A) => B. What does the right side mean? It returns B?
And this is apparently how it is used:
val numbers = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
val res = numbers.foldLeft(0)((m, n) => m + n)
print(res) // 55
What is going on? Why wasn't the [B] needed when called?
In the Scala documentation, the A type (sometimes A1) is typically the placeholder for a collection's element type. So if you have...
List('c','q','y').foldLeft( //....etc.
...then A becomes the reference for Char because the list is a List[Char].
The B is a placeholder for a 2nd type that foldLeft will have to deal with. Specifically it is the type of the 1st parameter as well as the type of the foldLeft result. Most of the time you actually don't need to specify it when foldLeft is invoked because the compiler will infer it. So, yeah, it could have been...
numbers.foldLeft[Int](0)((m, n) => m + n)
...but why bother? The compiler knows it's an Int and so does anyone reading it (anyone who knows Scala).
(B, A) => B is a function that takes 2 parameters, one of type B and one of type A, and produces a result of type B.
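To see A and B as genuinely different types, here's a small sketch that folds a List[Int] (so A = Int) into a String (so B = String):

```scala
val numbers = List(1, 2, 3)

// A = Int (element type), B = String (accumulator/result type).
// The compiler infers B = String from the first argument "".
val digits = numbers.foldLeft("")((acc, n) => acc + n)
// digits == "123"
```

The op function here has type (String, Int) => String, i.e. (B, A) => B.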
What is the [B] in square brackets?
A type parameter (that's also known as "generics" if you've seen something like Java before)
Why is it in brackets?
Because type parameters are written in brackets; that's just Scala syntax. Java, C#, Rust, and C++ use angle brackets < > for similar purposes, but since arrays in Scala are accessed as arr(idx), and (unlike in Haskell or Python) square brackets aren't used for list comprehensions, they were free to be used for type parameters. So there was no need for angle brackets (which are more difficult to parse anyway, especially in a language that allows almost arbitrary names for infix and postfix operators).
The B after the colon is the return type right?
Right.
What is the type?
Ditto. The return type is B.
It looks like this method has 2 parameter lists: one with a parameter named z and one with a parameter named op which is a function.
This method has a type parameter B and two argument lists for value arguments, correct. This is done to simplify the type inference: the type B can be inferred from the first argument z, so it does not have to be repeated when writing down the lambda-expression for op. This wouldn't work if z and op were in the same argument list.
op looks like it takes a function (B, A) => B.
The type of the argument op is (B, A) => B, that is, Function2[B, A, B], a function that takes a B and an A and returns a B.
What does the right side mean? It returns B?
Yes.
What is going on?
m acts as the accumulator; n is the current element of the list. The fold starts with the integer value 0 and then accumulates from left to right, adding up all the numbers. Instead of (m, n) => m + n, you could have written _ + _.
Why wasn't the [B] needed when called?
It was inferred from the type of z. There are many other cases where the generic type cannot be inferred automatically; then you have to specify it explicitly in the square brackets.
This is what is called polymorphism: the function can work on multiple types, and you sometimes want to say which type it will be working with. Basically, B is a type parameter; it can either be given explicitly in square brackets (here it would be [Int]) or inferred from a value argument in parentheses, as happened with the 0. Read more in the Scala documentation on polymorphism.
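A small sketch of both situations: B inferred from z, and the classic case (folding onto Nil) where it has to be supplied explicitly:

```scala
val numbers = List(1, 2, 3)

// Inference works: B = Int is deduced from z = 0.
val sum = numbers.foldLeft(0)(_ + _)  // 6

// Inference can go wrong: Nil alone gives B = List[Nothing], which is
// too narrow for `n :: acc`, so this line would not compile:
// val rev = numbers.foldLeft(Nil)((acc, n) => n :: acc)

// Supplying B explicitly fixes it:
val rev = numbers.foldLeft[List[Int]](Nil)((acc, n) => n :: acc)
// rev == List(3, 2, 1)
```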

How to annotate types of expressions within a Scala for expression

I'm wondering how to (safely) add type ascriptions to the lefthand side of Scala for expressions.
When working with complex for expressions, the inferred types are often hard to follow, or can go off track, and it's desirable to annotate the expressions within the for {..} block, to check that you and the compiler agree. An illustrative code sketch that uses the Eff monad:
for {
  e1: Option[(Int, Vector[Int])] <- complexCalc
  e2 <- process(e1)
} yield e2
def complexCalc[E]: Eff[E, Option[(Int, Vector[Int])]] = ???
However, this conflicts with Scala's support for pattern matching on the LHS of a for-expression.
As discussed in an earlier question, Scala treats the type ascription : Option[(Int, Vector[Int])] as a pattern to be matched.
The compiler has the concept of irrefutable patterns, which always match. Conversely, a refutable pattern might not match, and so when used in for expression, the righthand-side expression must provide a withFilter method to handle the non-matching case.
Due to a longstanding compiler bug/limitation, tuples and other common constructor patterns seem never to be treated as irrefutable.
The net result is that adding a type ascription makes the LHS pattern seem refutable and requires a filter op to be defined on the RHS. A symptom is compiler errors like value filter is not a member of org.atnos.eff.Eff[E,Option[(Int, Vector[Int])]].
Is there any syntactic form that can be used to ascribe types to the intermediate expressions containing tuples within for expressions, without encountering refutable pattern matching?
Turns out there's extensive discussion of this problem, and the current best solution, in a Scalaz PR.
The workaround is to add the type ascription in an assignment on the line below:
for {
  e <- complexCalc
  e1: Option[(Int, Vector[Int])] = e
  e2 <- process(e1)
} yield e2
def complexCalc[E]: Eff[E, Option[(Int, Vector[Int])]] = ???
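The same problem can be reproduced without Eff. Here's a self-contained sketch using a hypothetical Box type (with map and flatMap but no withFilter) as a stand-in:

```scala
// A hypothetical minimal "monad": it has map and flatMap, but no withFilter,
// just like Eff in the question.
case class Box[A](value: A) {
  def map[B](f: A => B): Box[B] = Box(f(value))
  def flatMap[B](f: A => Box[B]): Box[B] = f(value)
}

// Under Scala 2 this fails to compile: the ascription turns the LHS into a
// (refutable) pattern, so the compiler looks for Box.withFilter:
// for { x: Int <- Box(1) } yield x + 1

// The workaround: bind first, then ascribe on a separate `=` line.
val result = for {
  x <- Box(1)
  x1: Int = x
} yield x1 + 1
// result == Box(2)
```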

Why can't I downcast a map to a type whose key is a subtype of the original map's key

Why does this fail:
val h = new collection.mutable.HashMap[Integer,String]()
val g:collection.mutable.HashMap[Number,String] = h
when this succeeds:
val i:Integer = 10
val n:Number = i
and
val g:collection.mutable.HashMap[Number,String] = h.asInstanceOf[ collection.mutable.HashMap[Number,String] ]
always has the correct behavior (for g).
h's key objects are guaranteed to have the same behavior as that expected of g's keys, so it seems it should always be safe to use an h as a g. So why does the compiler disallow this cast?
Key types of Maps are not covariant. If you check the scaladoc, the type declaration of the immutable Map is Map[A, +B] (notice that A doesn't have a +). The reason: suppose the compiler allowed your assignment from h to g; then it would be possible to write the following:
g.get(1.234)
since 1.234 is a Number. Yet g is really h, and h expects its keys to be Integers only.
For similar reasons, if your map is mutable (as yours is), then the value type is not covariant either.
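Instead of casting, a safe alternative is to build a new map with the widened key type. A sketch (the variable names mirror the question):

```scala
import scala.collection.mutable

val h = new mutable.HashMap[Integer, String]()
h.put(1, "one")

// Direct assignment is rejected: the key type is invariant.
// val g: mutable.HashMap[Number, String] = h   // does not compile

// Safe: copy into a new map whose keys are upcast to Number.
val g: Map[Number, String] = h.map { case (k, v) => (k: Number) -> v }.toMap

// g.get(1.234) is now perfectly legal -- it simply returns None.
```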

When to use parenthesis in Scala infix notation

When programming in Scala, I do more and more functional stuff. However, when using infix notation it is hard to tell when you need parentheses and when you don't.
For example the following piece of code:
def caesar(k:Int)(c:Char) = c match {
  case c if c isLower => ('a'+((c-'a'+k)%26)).toChar
  case c if c isUpper => ('A'+((c-'A'+k)%26)).toChar
  case _ => c
}
def encrypt(file:String,k:Int) = (fromFile(file) mkString) map caesar(k)_
The (fromFile(file) mkString) needs parentheses in order to compile. When removed, I get the following error:
Caesar.scala:24: error: not found: value map
def encrypt(file:String,k:Int) = fromFile(file) mkString map caesar(k)_
^
one error found
mkString obviously returns a string, on which (by implicit conversion, AFAIK) I can use the map function.
Why does this particular case need parentheses? Is there a general guideline on when and why you need them?
This is what I put together for myself after reading the spec:
Any method which takes a single parameter can be used as an infix operator: a.m(b) can be written a m b.
Any method which does not require a parameter can be used as a postfix operator: a.m can be written a m.
For instance a.##(b) can be written a ## b and a.! can be written a!
Postfix operators have lower precedence than infix operators, so foo bar baz means foo.bar(baz) while foo bar baz bam means (foo.bar(baz)).bam and foo bar baz bam bim means (foo.bar(baz)).bam(bim).
Also given a parameterless method m of object a, a.m.m is valid but a m m is not as it would parse as exp1 op exp2.
Because there is a version of mkString that takes a single parameter, it will be seen as an infix operator in fromFile(file) mkString map caesar(k)_. There is also a version of mkString that takes no parameter, which can be used as a postfix operator:
scala> List(1,2) mkString
res1: String = 12
scala> List(1,2) mkString "a"
res2: String = 1a2
Sometimes, by adding a dot in the right location, you can get the precedence you need, e.g. fromFile(file).mkString map { }
All of this precedence resolution happens before type checking and other phases, so even though list mkString map function makes no sense as list.mkString(map).function, that is how it will be parsed.
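The same parsing issue can be reproduced with a plain List instead of fromFile. An illustrative sketch:

```scala
import scala.language.postfixOps  // postfix notation needs this flag in recent Scala

val xs = List('a', 'b', 'c')

// Without grouping, postfix mkString swallows the next token:
// xs mkString map (_.toUpper)   // error: parsed as xs.mkString(map) ...

// Grouping, or a dot, makes the intent explicit:
val s1 = (xs mkString) map (_.toUpper)  // "ABC"
val s2 = xs.mkString map (_.toUpper)    // same
```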
The Scala reference mentions (6.12.3: Prefix, Infix, and Postfix Operations)
In a sequence of consecutive type infix operations t0 op1 t1 op2 . . .opn tn, all operators op1, . . . , opn must have the same associativity.
If they are all left-associative, the sequence is interpreted as (. . . (t0 op1 t1) op2 . . .) opn tn.
In your case, map isn't an operand of the operator mkString, so you need grouping (hence the parentheses around fromFile(file) mkString).
Actually, Matt R comments:
It's not really an associativity issue, more that "Postfix operators always have lower precedence than infix operators. E.g. e1 op1 e2 op2 is always equivalent to (e1 op1 e2) op2". (Also from 6.12.3)
huynhjl's answer (upvoted) gives more details, and Mark Bush's answer (also upvoted) points to "A Tour of Scala: Operators" to illustrate that "Any method which takes a single parameter can be used as an infix operator".
Here's a simple rule: never use postfix operators. If you do use one, put the whole expression ending with the postfix operator inside parentheses.
In fact, starting with Scala 2.10.0, doing that will generate a warning by default.
For good measure, you might want to move the postfix operator out, and use dot notation for it. For example:
(fromFile(file)).mkString map caesar(k)_
Or, even more simply,
fromFile(file).mkString map caesar(k)_
On the other hand, pay attention to the methods where you can supply an empty parenthesis to turn them into infix:
fromFile(file) mkString () map caesar(k)_
The spec doesn't make it clear, but my experience and experimentation have shown that the Scala compiler will always try to treat method calls using infix notation as infix operators. Even though your use of mkString is postfix, the compiler tries to interpret it as infix and so tries to interpret "map" as its argument. All uses of postfix operators must either be immediately followed by an expression terminator or be used with "dot" notation for the compiler to see them as such.
You can get a hint of this (although it's not spelled out) in A Tour of Scala: Operators.