I'm new to CoffeeScript and I'm having trouble understanding some of its syntax.
For example, in this function call:
e('')
.color('rgb(255,0,0)')
.attr( x: 20,
  y: 100,
  w: 10,
  h: 100 )
I'm expecting this to compile to JS code that passes a single object with keys x, y, w, and h to the attr method. But this code actually compiles to this:
e('').color('rgb(255,0,0)').attr({
  x: 20
}, {
  y: 100,
  w: 10,
  h: 100
});
It's passing two objects to attr: the first with key x, and the second with keys y, w, and h. I'm having trouble understanding why x is separated from the other keys, while the other keys are not separated from each other.
Since I want to pass attr method one object, I tried this:
e('')
.color('rgb(255,0,0)')
.attr({x: 20,
  y: 100,
  w: 10,
  h: 100})
But this gives a compile error on the line where y: 100 appears: Error: Parse error on line 4: Unexpected '{'. The strange thing is, there is no { on line 4. I also tried removing the parentheses around the attr arguments but still got the same error.
I was able to work around it with this:
e('')
.color('rgb(255,0,0)')
.attr(
  x: 20,
  y: 100,
  w: 10,
  h: 100)
If I remove the newline after .attr(, then I get the same compiled code as in my first example, which is not what I want.
Now I'm wondering whether I'm misunderstanding some points of CoffeeScript syntax, or whether there really is strange stuff in it. Or did I catch a bug in CoffeeScript? Any ideas?
I'm using CoffeeScript 1.3.1
In CoffeeScript, whitespace is significant. You can't just line things up wherever you think they should go. Try something like this:
e('')
.color('rgb(255,0,0)')
.attr(
  x: 20
  y: 100
  w: 10
  h: 100
)
Edit: If you want to have x on the same line as the method call, you just need to indent the continuation lines to match:
e('')
.color('rgb(255,0,0)')
.attr(x: 20,
      y: 100,
      w: 10,
      h: 100)
This is what you're looking for:
e('')
.color('rgb(255,0,0)')
.attr
  x: 20
  y: 100
  w: 10
  h: 100
which compiles to:
e('').color('rgb(255,0,0)').attr({
  x: 20,
  y: 100,
  w: 10,
  h: 100
});
Remember, CoffeeScript is all about simplicity and avoiding curly braces and commas...
The obvious solution would be to put your object on one line, as in .attr({x: 20, y: 100, w: 10, h: 100}). (I haven't tested it, but I don't see why it wouldn't work.)
While you can sometimes omit the parentheses, I prefer to use them in function calls, as I find it more readable.
I expected the following code to output Seq(0); instead it returns a function:
# Seq(0).orElse(Seq(1))
res2: PartialFunction[Int, Int] = <function1>
At first I suspected that, via syntactic sugar, orElse was being called on the apply method, but that isn't it, since trying this fails:
# Seq(0).apply.orElse(Seq(1))
cmd3.sc:1: missing argument list for method apply in trait SeqLike
....(omit)
I checked in IntelliJ that there's no implicit conversion.
What happens?
EDIT:
What I wished for is:
Seq.empty.orElse(Seq(1)) == Seq(1)
Seq(0).orElse(Seq(1)) == Seq(0)
Thanks to @AndreyTyukin's answer.
In one line: orElse has different semantics on different types. Seq inherits from PartialFunction, not Option, so it gets PartialFunction's orElse behavior.
Seq(0) is treated as a PartialFunction that is defined only at index 0 and produces the constant value 0 when given the only valid input, 0.
When you invoke orElse with Seq(1), a new partial function is constructed that first tries to apply Seq(0), and if the input is not in the domain of definition of Seq(0), it falls back to Seq(1). Since the domain of Seq(1) is the same as the domain of Seq(0) (namely just {0}), the orElse does essentially nothing in this case and returns a partial function equivalent to Seq(0).
So, the result is again a partial function defined at 0 that gives 0 if it is passed the only valid input 0.
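To make the mechanics concrete outside the REPL, here is a rough Python emulation of this orElse behavior (seq_or_else is a made-up helper for illustration, not part of any library); a sequence of length n plays the role of a partial function defined exactly at indices 0 through n-1:

```python
def seq_or_else(first, second):
    """Mimic PartialFunction#orElse for sequences: try `first`,
    fall back to `second` where `first` is undefined."""
    def pf(i):
        if 0 <= i < len(first):
            return first[i]
        if 0 <= i < len(second):
            return second[i]
        raise IndexError(i)  # undefined in both sequences
    return pf

pf = seq_or_else([0], [1])
print(pf(0))  # 0 -- the first sequence wins at its only defined index

fallback = seq_or_else([], [1])
print(fallback(0))  # 1 -- falls back when the first sequence is empty
```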
Here is a non-degenerate example with sequences of different length, which hopefully makes it easier to understand what the orElse method is for:
val f = Seq(1,2,3).orElse(Seq(10, 20, 30, 40, 50))
is a partial function:
f: PartialFunction[Int,Int] = <function1>
Here is how it maps values 0 to 4:
0 to 4 map f
// Output: Vector(1, 2, 3, 40, 50)
That is, it uses the first three values from the first sequence and falls back to the second sequence (the one passed to orElse) for inputs 3 and 4.
This also works with arbitrary partial functions, not only sequences:
scala> val g = Seq(42,43,44).orElse[Int, Int]{ case n => n * n }
g: PartialFunction[Int,Int] = <function1>
scala> 0 to 10 map g
res7 = Vector(42, 43, 44, 9, 16, 25, 36, 49, 64, 81, 100)
If you wanted to select between two sequences without treating them as partial functions, you might consider using
Option(Seq(0)).getOrElse(Seq(1))
This will return Seq(0), if this is what you wanted.
Is there any way of doing the following in Scala?
Say I have an array of Double of size 15:
[10,20,30,40,50,60,70,80,Double.NaN,Double.NaN,110,120,130,140,150]
I would like to replace all the Double.NaN (from left to right) by the average of the last four values in the array using map reduce. So the first Double.NaN gets replaced by 60, and the next Double.NaN is replaced by 64 (i.e., the previously calculated 60 at the index 8 is used in this calculation).
So far I have used function type parameters to get the positions of the Double.NaN.
I'm not sure what exactly you mean by "map-reduce" in this case. It looks rather like a use-case for scanLeft:
import scala.collection.immutable.Queue
val input = List[Double](
  10, 20, 30, 40, 50, 60, 70, 80, Double.NaN,
  Double.NaN, 110, 120, 130, 140, 150
)
val patched = input.scanLeft((Queue.fill(5)(0d), 0d)) {
  case ((q, _), x) =>
    val y = if (x.isNaN) q.sum / 5 else x
    (q.dequeue._2.enqueue(y), y)
}.unzip._2.tail
This produces:
List(10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 60.0, 64.0, 110.0, 120.0, 130.0, 140.0, 150.0)
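As a cross-check, the same rolling-window idea can be sketched in Python (patch_nans is an illustrative name, not from the original code):

```python
from collections import deque
from math import isnan, nan

def patch_nans(xs, window=5):
    # Rolling window seeded with zeros, mirroring the scanLeft above.
    q = deque([0.0] * window, maxlen=window)
    out = []
    for x in xs:
        y = sum(q) / window if isnan(x) else x
        q.append(y)  # maxlen drops the oldest value automatically
        out.append(y)
    return out

data = [10, 20, 30, 40, 50, 60, 70, 80, nan, nan, 110, 120, 130, 140, 150]
print(patch_nans(data))  # ..., 80, 60.0, 64.0, 110, ...
```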
In general, unless the gaps are "rare", this would not work with a typical map-reduce workflow, because:
- Every value in the resulting list can depend on arbitrarily many values to the left of it, so you cannot cut the dataset into independent blocks and map them independently.
- You are not reducing anything; you want a patched list back.
If you are neither mapping nor reducing, I wouldn't call it "map-reduce".
By the way: the above code works for any positive integer in place of the window size 5.
Note that averaging the last four values before the first NaN in the given example (50, 60, 70, 80) gives 65, not 60. Averaging the last five gives 60.
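A quick arithmetic check of that claim (in Python, purely for illustration):

```python
prefix = [10, 20, 30, 40, 50, 60, 70, 80]  # values before the first NaN
print(sum(prefix[-4:]) / 4)  # 65.0 -- last four are 50, 60, 70, 80
print(sum(prefix[-5:]) / 5)  # 60.0 -- last five are 40, 50, 60, 70, 80
```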
Does it have to be a map-reduce? How about a fold?
(List[Double]() /: listOfDoubles)((acc: List[Double], double: Double) => {
  (if (double.isNaN)
     acc match {
       case Nil => 0.0 // first double in the list
       case _ =>
         val last5 = acc.take(5)
         (0.0 /: last5)(_ + _) / last5.size // in case there are only 1 to 4 accumulated values instead of 5
     }
   else double) :: acc
}).reverse
I'm running into a strange issue when trying to sum a list of doubles that are contained in different instances using foldLeft. Upon investigation, it seems that even when working with a list of simple doubles, the issue persists:
val listOfDoubles = List(4.0, 100.0, 1.0, 0.6, 8.58, 80.0, 22.33, 179.99, 8.3, 59.0, 0.6)
listOfDoubles.foldLeft(0.0) ((elem, res) => res + elem) // gives 464.40000000000003 instead of 464.40
What am I doing wrong here?
NOTE: foldLeft here is necessary as what I'm trying to achieve is a sum of doubles contained in different instances of a case class SomeClass(value: Double), unless, of course, there is another method to go about this.
What am I doing wrong here?
Using doubles when you need this kind of precision. It's not a problem with foldLeft; you're suffering from floating-point rounding error. The problem is that some numbers need very long (or infinite) representations in binary, which means they have to be truncated in binary form, and when converted back to decimal, that binary rounding error surfaces.
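This is not specific to Scala; any IEEE 754 double behaves this way. For instance, the same sum can be reproduced in Python (whose floats are the same 64-bit doubles), with the standard library's decimal module playing the role BigDecimal plays below:

```python
from decimal import Decimal

vals = [4.0, 100.0, 1.0, 0.6, 8.58, 80.0, 22.33, 179.99, 8.3, 59.0, 0.6]

total = sum(vals)  # same left-to-right addition as foldLeft
print(total)       # 464.40000000000003 -- binary rounding error

exact = sum(Decimal(str(v)) for v in vals)
print(exact)       # 464.40 -- decimal arithmetic keeps the scale
```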
Use BigDecimal instead, as it is designed for arbitrary-precision arithmetic.
scala> val list: List[BigDecimal] = List(4.0, 100.0, 1.0, 0.6, 8.58, 80.0, 22.33, 179.99, 8.3, 59.0, 0.6)
list: List[BigDecimal] = List(4.0, 100.0, 1.0, 0.6, 8.58, 80.0, 22.33, 179.99, 8.3, 59.0, 0.6)
scala> list.foldLeft(BigDecimal(0.0))(_ + _)
res3: scala.math.BigDecimal = 464.40
Or simply:
scala> list.sum
res4: BigDecimal = 464.40
I have a search query. If it finds a match, I would like to push onto that document's 'vals' array; if it does not find a match, I would like to insert the search query along with the newVals array.
findDict = {a: 100, b: 250, c: 110}
newVals = [{x: 1, y: 2}, {x: 4, y: 7}]
collection.update(findDict,{'$push': {'vals': newVals}}, upsert = True)
In this example above, if a match was found for findDict, then newVals would be pushed onto the existing vals array for the matching record.
If no match is found, I would like it to create a new record that looks like this:
{a: 100, b: 250, c: 110, vals: [{x: 1, y: 2}, {x: 4, y: 7}]}
I have to do this several million times, so I'm hoping to do it in the most optimal way. I also have many threads coming in and doing this at once so have to worry about concurrency. The update statement posted above almost seems to work, but it creates an entry like this for some reason if no match is found:
{a: 100, b: 250, c: 110, vals: [ [ {x: 1, y: 2}, {x: 4, y: 7} ] ]}
note the array inside the array...
I currently have a unique compound index on a, b, and c. This can be changed if it will help somehow. I think I could do an update with upsert set to False, followed by an insert that will fail if a document with those keys already exists (because of the unique index)... but it seems I would be doing each search twice in that case, killing my efficiency.
Have you tried using $push with $each?
collection.update(
    findDict,
    {'$push': {'vals': {'$each': newVals}}},
    upsert=True
)
I have a MongoDB collection whose documents look like the following:
{
  X: 1,
  Y: 2,
  Z: 3,
  T_update: 123,
  T_publish: 243,
  T_insert: 342
}
I have to create indexes like:
{X: 1, Y: 1, Z: 1, T_update: 1}
{X: 1, Y: 1, Z: 1, T_publish: 1}
{X: 1, Y: 1, Z: 1, T_insert: 1}
But what I see is that the X: 1, Y: 1, Z: 1 prefix is redundant across all three, and only the time parameter, which I intend to use for sorting, changes. Is there any better way to create the above indexes so that I do not have to maintain three separate ones?
Also say if I have index like
{X: 1, Y: 1, Z: 1, T_update: 1}
and I want Mongo to return results where X = 5, Y = any value, Z = 4, sorted by T_update,
will the above index be useful or should I create an index such as
{X: 1, Z: 1, T_update: 1}
which I hope I can avoid.
The answer here is going to depend on the selectivity of the fields you are indexing - if the criteria you will be using to filter X, Y, or Z are not very selective then they can essentially be left out (or moved to the right of the compound key).
Let's say you are using a filter like Y is not equal to 1, where 1 is a rare value. Since you will be traversing almost the entire index to return most of the values, and scanning the data, having an index on Y will be of less benefit than having an index for the sort first. Given that scenario, if sorting on T_update it would probably be beneficial to have an index like: {T_update: 1, Y: 1}.
In the end, there are lots and lots of permutations here in terms of what might be the most efficient way to index. The real way to figure out the best indexes for your data set is to use explain() and hint() to test the various indexes with your specific query pattern and data set.