While loop in CoffeeScript

I'm new to CoffeeScript and have been reading the book The Little Book on CoffeeScript. Here are a few lines from the book's Chapter 2 that confused me while reading:
The only low-level loop that CoffeeScript exposes is the while loop. This has similar behavior to the while loop in pure JavaScript, but has the added advantage that it returns an array of results, i.e. like the Array.prototype.map() function.
num = 6
minstrel = while num -= 1
num + " Brave Sir Robin ran away"
Though it may look good to a CoffeeScript programmer, being a newbie, I'm unable to understand what the code does. Moreover, the words "returns an array of results" don't seem to go together with the fact that while is a loop construct, not a function, so the notion of it returning something seems confusing. Furthermore, concatenating the variable num with the string "Brave Sir Robin ran away" in every iteration of the loop seems awkward, as num is being used as the loop counter.
I would be thankful if you could explain the behavior of the code and perhaps illustrate what the author is trying to convey with simpler examples.

Wow! I didn't know that, but it absolutely makes sense if you remember that CoffeeScript always returns the last expression of a "block".
So in your case it returns (not via a "return" statement, if that is what confuses you) the expression
num + " Brave Sir Robin ran away"
from the block associated with the while condition, and since one such expression is produced per iteration, it pushes them onto an array. Here minstrel ends up as the five-element array ["5 Brave Sir Robin ran away", "4 Brave Sir Robin ran away", ..., "1 Brave Sir Robin ran away"].
Have a look at the generated JavaScript and it might be clearer, as the generated code is pretty much procedural:
var minstrel, num;
num = 6;
minstrel = (function() {
  var _results;
  _results = [];
  while (num -= 1) {
    _results.push(num + " Brave Sir Robin ran away");
  }
  return _results;
})();
I hope that makes sense to you.

Beware: that wrapping function call can be very inefficient!
Below is a prime factors generator:
'use strict'
exports.generate = (number) ->
  return [] if number < 2
  primes = []
  candidate = 1
  while number > 1
    candidate++
    while number % candidate is 0
      primes.push candidate
      number /= candidate
    candidate = number - 1 if Math.sqrt(number) < candidate
  primes
This is the version using while as an expression:
'use strict'
exports.generate = (number) ->
  return [] if number < 2
  candidate = 1
  while number > 1
    candidate++
    primes = while number % candidate is 0
      number /= candidate
      candidate
    candidate = number - 1 if Math.sqrt(number) < candidate
  primes
The first version ran my tests in 4 milliseconds; the second one takes 18 milliseconds. I believe the reason is the generated closure that returns the primes.

Related

"Find the Parity Outlier Code Wars (Scala)"

I have been doing some CodeWars challenges recently and I've got a problem with this one.
"You are given an array (which will have a length of at least 3, but could be very large) containing integers. The array is either entirely comprised of odd integers or entirely comprised of even integers except for a single integer N. Write a method that takes the array as an argument and returns this "outlier" N."
I've looked at some solutions that are already on the website, but I want to solve the problem using my own approach.
The main problem with my code seems to be that it ignores negative numbers, even though I've used the Math.abs() method in Scala.
If you have an idea how to get around it, that is more than welcome.
Thanks a lot
object Parity {
  var even = 0
  var odd = 0
  var result = 0

  def findOutlier(integers: List[Int]): Int = {
    for (y <- 0 until integers.length) {
      if (Math.abs(integers(y)) % 2 == 0)
        even += 1
      else
        odd += 1
    }
    if (even == 1) {
      for (y <- 0 until integers.length) {
        if (Math.abs(integers(y)) % 2 == 0)
          result = integers(y)
      }
    } else {
      for (y <- 0 until integers.length) {
        if (Math.abs(integers(y)) % 2 != 0)
          result = integers(y)
      }
    }
    result
  }
}
Your code handles negative numbers just fine. The problem is that you rely on mutable state, which leaks between runs of your code. Your code behaves as follows:
val l = List(1,3,5,6,7)
println(Parity.findOutlier(l)) //6
println(Parity.findOutlier(l)) //7
println(Parity.findOutlier(l)) //7
The first run is correct. However, when you run it the second time, even, odd, and result all have the values from your previous run still in them. If you define them inside of your findOutlier method instead of in the Parity object, then your code gives correct results.
Additionally, I highly recommend reading over the methods available on a Scala List. You should almost never need to loop through a List like that, and there are a number of much more concise solutions to the problem. Mutable vars are also a pretty big red flag in Scala code, as are excessive if statements.
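For what it's worth, here is a hedged sketch of one such stateless version (one of many possible approaches, not the canonical CodeWars solution):

object Parity {
  def findOutlier(integers: List[Int]): Int = {
    // Split the list by parity; % 2 == 0 is true for negative evens too,
    // so no Math.abs is needed.
    val (evens, odds) = integers.partition(_ % 2 == 0)
    // The outlier is the sole element of whichever group has exactly one member.
    if (evens.length == 1) evens.head else odds.head
  }
}

Because nothing is shared between calls, Parity.findOutlier(List(1, 3, 5, 6, 7)) returns 6 every time you run it.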

Merge Sort algorithm efficiency

I am currently taking an online algorithms course in which the teacher doesn't give code to solve the algorithm, but rather rough pseudo code. So before taking to the internet for the answer, I decided to take a stab at it myself.
In this case, the algorithm that we were looking at is the merge sort algorithm. After being given the pseudo code we also dove into analyzing the algorithm's running time against the number of items n in an array. After a quick analysis, the teacher arrived at 6n log₂(n) + 6n as an approximate run time for the algorithm.
The pseudo code given was for the merge portion of the algorithm only and was given as follows:
C = output [length = n]
A = 1st sorted array [n/2]
B = 2nd sorted array [n/2]
i = 1
j = 1
for k = 1 to n
    if A(i) < B(j)
        C(k) = A(i)
        i++
    else [B(j) < A(i)]
        C(k) = B(j)
        j++
    end
end
He basically did a breakdown of the above, arriving at 4n + 2 (2 for the declarations of i and j, and 4 for the number of operations performed in each pass: the for check, the if, the array position assignment, and the increment). He simplified this, I believe for the sake of the class, to 6n.
This all makes sense to me; my question arises from the implementation I am writing, how it affects the algorithm, and some of the tradeoffs/inefficiencies it may add.
Below is my code in Swift, using a playground:
func mergeSort<T: Comparable>(_ array: [T]) -> [T] {
    guard array.count > 1 else { return array }

    let lowerHalfArray = array[0..<array.count / 2]
    let upperHalfArray = array[array.count / 2..<array.count]

    let lowerSortedArray = mergeSort(Array(lowerHalfArray))
    let upperSortedArray = mergeSort(Array(upperHalfArray))

    return merge(lhs: lowerSortedArray, rhs: upperSortedArray)
}

func merge<T: Comparable>(lhs: [T], rhs: [T]) -> [T] {
    guard lhs.count > 0 else { return rhs }
    guard rhs.count > 0 else { return lhs }

    var i = 0
    var j = 0
    var mergedArray = [T]()

    let loopCount = lhs.count + rhs.count

    for _ in 0..<loopCount {
        if j == rhs.count || (i < lhs.count && lhs[i] < rhs[j]) {
            mergedArray.append(lhs[i])
            i += 1
        } else {
            mergedArray.append(rhs[j])
            j += 1
        }
    }

    return mergedArray
}

let values = [5, 4, 8, 7, 6, 3, 1, 2, 9]
let sortedValues = mergeSort(values)
let values = [5,4,8,7,6,3,1,2,9]
let sortedValues = mergeSort(values)
My questions for this are as follows:
Do the guard statements at the start of the merge<T: Comparable> function actually make it less efficient? Considering we are always halving the array, the only time they will hold true is in the base case and when there is an odd number of items in the array.
To me this seems like it would add more processing for minimal return, since it only happens once we have halved the array to the point where one half has no items.
Concerning the if statement in merge: since it checks more than one condition, does this affect the overall efficiency of the algorithm I have written? If so, the effect seems to me to vary based on where it breaks out of the if statement (e.g. at the first condition or the second).
Is this something that is considered heavily when analyzing algorithms, and if so how do you account for the variance when it breaks out from the algorithm?
Any other analysis/tips you can give me on what I have written would be greatly appreciated.
You will very soon learn about Big-O and Big-Theta where you don't care about exact runtimes (believe me when I say very soon, like in a lecture or two). Until then, this is what you need to know:
Yes, the guards take some time, but it is the same amount of time in every iteration. So if each iteration takes X amount of time without the guard and you do n function calls, then it takes X*n time in total. Now add in the guards, which take Y amount of time in each call. You now need (X+Y)*n time in total. This is a constant factor, and when n becomes very large the (X+Y) factor becomes negligible compared to the n factor. That is, if you can reduce X*n to (X+Y)*(log n), then it is worthwhile to add the Y amount of work, because you do fewer iterations in total.
The same reasoning applies to your second question. Yes, checking "if X or Y" takes more time than checking "if X" but it is a constant factor. The extra time does not vary with the size of n.
In some languages you only check the second condition if the first fails. How do we account for that? The simplest solution is to realize that the upper bound of the number of comparisons will be 3, while the number of iterations can be potentially millions with a large n. But 3 is a constant number, so it adds at most a constant amount of work per iteration. You can go into nitty-gritty details and try to reason about the distribution of how often the first, second and third condition will be true or false, but often you don't really want to go down that road. Pretend that you always do all the comparisons.
So yes, adding the guards might be bad for your runtime if you do the same number of iterations as before. But sometimes adding extra work in each iteration can decrease the number of iterations needed.

What does for (i = 1; i <= power; i += 1) { mean?

Sorry for the pretty basic question.
Just starting out. I'm using Flowgorithm to write code that gives out a calculation of exponential numbers.
The code to do it is:
function exponential(base, power) {
    var answer;
    answer = 1;
    var i;
    for (i = 1; i <= power; i += 1) {
        answer = answer * base;
    }
    return answer;
}
It then loops up to the value of power. I understand that in the Flowgorithm chart, but I don't understand the code for it.
What does each section of the for statement mean?
It goes from i = 1 up to power; I just need help understanding how it is written. What is the i += 1 bit?
Thanks.
The exponential function takes in 2 parameters, base and power.
You can create this function and call (fire) it whenever it is needed, like so: exponential(2, 4).
The for (i = 1; i <= power; i += 1) is somewhat of an ugly for loop.
for loops traditionally take three parts. The first part, in this case i = 1, is the initialization part; the next one, i <= power, is the validation (condition) part. So if we call the function like exponential(2, 4): is i less than or equal to 4? The last part is the increment/decrement part, but it doesn't get executed until the code inside the for loop has run. Once the code inside the for loop has executed, the variable i adds 1 to itself, so it is now 2. This is useful because once i is no longer less than or equal to power it will exit the for loop. So in the case of exponential(2, 4), the code inside this for loop runs 4 times and then the loop exits, because 5 > 4.
So if we look at the variable answer, we can see that before the for loop was entered answer was equal to 1. After the first iteration of the for loop, answer = answer times base. In the case of exponential(2, 4), answer equals 1 times 2, so now answer = 2. But we have only looped through the for loop once, and like I said a for loop goes (initialization, validation, "code inside the for loop", then back up to increment/decrement). Since we loop through this for loop 4 times in the case of exponential(2, 4), it will look like so:
exponential(2, 4)
answer = 1 * 2
now answer = 2
answer = 2 * 2
now answer = 4
answer = 4 * 2
now answer = 8
answer = 8 * 2
now answer = 16
So if we say... var ans = exponential(2, 4)
Then ans would equal 16, hence the return answer; at the last line of your code.

In count change recursive algorithm, why do we return 1 if the amount = 0?

I am taking a Coursera course and, for an assignment, I have written code to count the ways to make change for an amount, given a list of denominations. After doing a lot of research, I found explanations of various algorithms. In the recursive implementation, one of the base cases is that if the amount is 0 then the count is 1. I don't understand why, but this is the only way the code works. I feel that if the amount is 0 then there is no way to make change for it, and I should throw an exception instead. The code looks like:
def countChange(amount: Int, denoms: List[Int]): Int = {
  if (amount == 0) return 1
  ...
Any explanation is much appreciated.
To avoid speaking specifically about the Coursera problem, I'll refer to a simpler but similar problem.
How many outcomes are there for 2 coin flips? 4.
(H,H),(H,T),(T,H),(T,T)
How many outcomes are there for 1 coin flip? 2.
(H),(T)
How many outcomes are there for 0 coin flips? 1.
()
Expressing this recursively, how many outcomes are there for N coin flips? Let's call it f(N) where
f(N) = 2 * f(N - 1), for N > 0
f(0) = 1
The N = 0 trivial (base) case is chosen so that the non-trivial cases, defined recursively, work out correctly. Since we're doing multiplication in this example and the identity element for multiplication is 1, it makes sense to choose that as the base case.
Alternatively, you could argue from combinatorics: n choose 0 = 1, 0! = 1, etc.
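To make the parallel concrete, here is a hedged Scala sketch (Scala being the language of the question). The countChange below is a common shape for this recursion, not necessarily the exact assignment code; the point is only that amount == 0 plays the same role as f(0) = 1 above:

// Outcomes for n coin flips: the empty sequence of flips is the single
// outcome for n = 0, mirroring the f(0) = 1 base case above.
def flipOutcomes(n: Int): Int =
  if (n == 0) 1 else 2 * flipOutcomes(n - 1)

// One common recursion for counting change (illustrative, not the
// official assignment solution): either use the first denomination
// or skip it entirely.
def countChange(amount: Int, denoms: List[Int]): Int =
  if (amount == 0) 1                        // exactly one way: take no coins
  else if (amount < 0 || denoms.isEmpty) 0  // dead end: cannot make change
  else countChange(amount - denoms.head, denoms) + countChange(amount, denoms.tail)

With the base case returning 1, countChange(4, List(1, 2)) counts three ways (1+1+1+1, 1+1+2, 2+2); if the base case returned 0 or threw, every branch would report zero ways.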

For loop in Scala without sequence?

So, while working my way through "Scala for the Impatient" I found myself wondering: Can you use a Scala for loop without a sequence?
For example, there is an exercise in the book that asks you to build a counter object that cannot be incremented past Integer.MAX_VALUE. In order to test my solution, I wrote the following code:
var c = new Counter
for( i <- 0 to Integer.MAX_VALUE ) c.increment()
This throws an error: sequences cannot contain more than Int.MaxValue elements.
It seems to me that this means Scala first allocates and populates a sequence object with the values 0 through Integer.MAX_VALUE, and then does a foreach loop over that sequence object.
I realize that I could do this instead:
var c = new Counter
while(c.value < Integer.MAX_VALUE ) c.increment()
But is there any way to do a traditional C-style for loop with the for statement?
In fact, 0 to N does not actually populate anything with integers from 0 to N. It instead creates an instance of scala.collection.immutable.Range, which applies its methods to all the integers generated on the fly.
The error you ran into is only because you have to be able to fit the number of elements (whether they actually exist or not) into the positive part of an Int in order to maintain the contract for the length method. 1 to Int.MaxValue works fine, as does 0 until Int.MaxValue. And the latter is what your while loop is doing anyway (to includes the right endpoint, until omits it).
Anyway, since the Scala for is a very different (much more generic) creature than the C for, the short answer is no, you can't do exactly the same thing. But you can probably do what you want with for (though maybe not as fast as you want, since there is some performance penalty).
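For instance (a minimal sketch reusing the Counter class from the question, which is assumed here to exist with an increment() method), the loop from the question works as soon as the element count fits in a positive Int:

val c = new Counter  // Counter is the asker's class, assumed here
// 0 until Int.MaxValue is a Range: its values are generated on the fly,
// so nothing two-billion-elements long is ever allocated.
for (i <- 0 until Int.MaxValue) c.increment()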
Wow, some nice technical answers for a simple question (which is good!) But in case anyone is just looking for a simple answer:
//start from 0, stop at 9 inclusive
for (i <- 0 until 10) {
  println("Hi " + i)
}

//or start from 0, stop at 9 inclusive
for (i <- 0 to 9) {
  println("Hi " + i)
}
As Rex pointed out, "to" includes the right endpoint, "until" omits it.
Yes and no, it depends what you are asking for. If you're asking whether you can iterate over a sequence of integers without having to build that sequence first, then yes you can, for instance using streams:
def fromTo(from: Int, to: Int): Stream[Int] =
  if (from > to) {
    Stream.empty
  } else {
    // println("one more.") // uncomment to see when it is called
    Stream.cons(from, fromTo(from + 1, to))
  }
Then:
for(i <- fromTo(0, 5)) println(i)
Writing your own iterator by defining hasNext and next is another option.
If you're asking whether you can use the 'for' syntax to write a "native" loop, i.e. a loop that works by incrementing some native integer rather than iterating over values produced by an instance of an object, then the answer is, as far as I know, no. As you may know, 'for' comprehensions are syntactic sugar for a combination of calls to flatMap, filter, map and/or foreach (all defined in the FilterMonadic trait), depending on the nesting of generators and their types. You can try to compile some loop and print its compiler intermediate representation with
scalac -Xprint:refchecks
to see how they are expanded.
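For example (a rough sketch; the actual tree the compiler prints is more verbose), a single-generator for with no yield expands into a foreach call:

// This comprehension...
for (i <- 0 until 10) println("Hi " + i)
// ...is syntactic sugar for roughly:
(0 until 10).foreach(i => println("Hi " + i))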
There's a bunch of these out there, but I can't be bothered googling them at the moment. The following is pretty canonical:
@scala.annotation.tailrec
def loop(from: Int, until: Int)(f: Int => Unit): Unit = {
  if (from < until) {
    f(from)
    loop(from + 1, until)(f)
  }
}

loop(0, 10) { i =>
  println("Hi " + i)
}