I'm trying to write the Jenkins Hash (http://burtleburtle.net/bob/hash/doobs.html) in Scala. This bit is tricky...
switch(len) /* all the case statements fall through */
{
case 11: c+=((ub4)k[10]<<24);
case 10: c+=((ub4)k[9]<<16);
case 9 : c+=((ub4)k[8]<<8);
/* the first byte of c is reserved for the length */
case 8 : b+=((ub4)k[7]<<24);
case 7 : b+=((ub4)k[6]<<16);
case 6 : b+=((ub4)k[5]<<8);
case 5 : b+=k[4];
case 4 : a+=((ub4)k[3]<<24);
case 3 : a+=((ub4)k[2]<<16);
case 2 : a+=((ub4)k[1]<<8);
case 1 : a+=k[0];
/* case 0: nothing left to add */
}
match statements don't fall through in Scala (which is an excellent design decision). So best way to do this I can think of is to have one if statement per case. I'm hoping someone can see a more pithy solution…
Something like this perhaps?
val (a,b,c) = k
.zipWithIndex
.foldLeft((0,0,0)) { case ((a,b,c), (v, index)) =>
index match {
case i if i < 4 => (a + (v << i*8), b, c)
case i if i < 8 => (a, b + (v << (i-4)*8), c)
case i => (a, b, c + (v << (i-7)*8))
}
}
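Note that if k is an Array[Byte], the raw shifts sign-extend negative byte values; masking each byte with & 0xff avoids that. A minimal sketch of the same fold with masking (keeping the shift scheme above; the length byte of c is still not handled here):
val (a, b, c) = k.zipWithIndex.foldLeft((0, 0, 0)) { case ((a, b, c), (v, i)) =>
  val u = v & 0xff   // widen the byte to an Int without sign extension
  if (i < 4)      (a + (u << (i * 8)), b, c)
  else if (i < 8) (a, b + (u << ((i - 4) * 8)), c)
  else            (a, b, c + (u << ((i - 7) * 8)))
}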
I haven't tested the result, but this is one way to implement it. Drop and take the bytes of the array needed for each value, and fold from the right so that successive bytes end up shifted by 0, 8, 16 and 24 bits.
Working with unsigned values on the JVM is a bit cumbersome and requires widening to a larger data type (byte to int, int to long). Therefore we use longify to convert a byte to a long, and we also need to account for half the byte values being negative.
val longify = ((n: Byte) => n.toLong) andThen (n => if (n < 0) n + 256 else n)
def sum(m: Int, n: Int) =
k.drop(m).take(n).foldRight(0L)((b, s) => longify(b) + (s << 8))
val a = sum(0, 4)
val b = sum(4, 4)
val c = (sum(8, 3) << 8) + k.size
I am new to Scala and want to write code that adds two numbers represented by linked lists, as per the example below.
Input:
First List: 5->6->3 // represents number 365
Second List: 8->4->2 // represents number 248
Output
Resultant list: 3->1->6 // represents number 613
I have implemented a mutable singly linked list in Scala for adding, updating and inserting elements. Find my code below:
class SinglyLinkedList[A] extends ListADT[A] {
private class Node(var data: A,var next: Node)
private var head: Node = null
def apply(index: Int): A = {
require(index >= 0)
var rover = head
for (i <- 0 until index) rover = rover.next
rover.data
}
def update(index: Int,data: A): Unit = {
require(index >= 0)
var rover = head
for (i <- 0 until index) rover = rover.next
rover.data = data
}
def insert(index: Int,data: A): Unit = {
require(index >= 0)
if(index == 0) {
head = new Node(data, head)
}
else{
var rover = head
for (i <- 0 until index-1)
rover = rover.next
rover.next = new Node(data, rover.next)
}
}
def remove(index: Int): A = {
require(index >= 0)
if(index == 0){
val ret = head.data
head = head.next
ret
} else {
var rover = head
for (i <- 0 until index-1) rover = rover.next
val ret = rover.next.data
rover.next = rover.next.next
ret
}
}
}
Can anyone let me know how to perform the addition of two numbers represented by linked lists?
How does addition work? I mean addition on paper: one number written under the other.
Let's try for 465 + 248
465
+ 248
---
We start with the least significant digits: 5 + 8. But 5 + 8 = 13, so the result won't fit into a single digit. So we do just what we were taught at school: we leave the units digit and carry the tens digit to the next column:
1
465
+ 248
---
3
Now tens. 6 + 4 + (carried) 1 = 11. Again, we leave 1 and carry 1 to the next column:
11
465
+ 248
---
13
And the last column. 4 + 2 + 1 = 7.
11
465
+ 248
---
713
Thus the result is 713. If one of the numbers has more columns, or the last addition produces a carry, you simply copy down the remaining digits.
With an immutable linked list it works the same way (I'll explain in a moment why I used immutable):
take both lists
take the heads of both lists (if one of them is empty, you can just return the other as the result of the addition)
add the heads, and split the result into a carry and the current digit (the carry will be 0 or 1, the digit 0 to 9)
if the carry is > 0, recursively add the list carry :: Nil to one of the tails
prepend the digit to the recursively added tails
You should end up with something like that:
def add(a: List[Int], b: List[Int]): List[Int] = (a, b) match {
  case (x :: xs, y :: ys) =>
    val digit = (x + y) % 10
    val carry = (x + y) / 10
    if (carry > 0) digit :: add(add(xs, carry :: Nil), ys)
    else digit :: add(xs, ys)
  case (xs, Nil) => xs
  case (Nil, ys) => ys
}
add(5 :: 6 :: 4 :: Nil, 8 :: 4 :: 2 :: Nil) // 3 :: 1 :: 7 :: Nil
Now, if you used a mutable list it would get trickier. With a mutable list you presumably want to update one of them in place, right? Which one - the first? The second? Both? Your algorithm might calculate the right result but butcher the input.
Let's say you always add the second list into the first one and want to leave the second intact. If the second list is longer and you have to append new nodes for its extra digits, you must copy the remaining segments (otherwise updating one number later could silently change the other). You would also have to handle the corner case with the carry.
Quite counter-intuitive behavior for something that is meant to represent numbers - numbers themselves are not mutable.
Try this:
def add(a: List[Int], b: List[Int], o: Int): List[Int] = (a,b,o) match {
case (x::xs, y::ys, d) =>
val v = d + x + y
(v%10)::add(xs, ys, v/10)
case (Nil, Nil, 0) => Nil
case (Nil, Nil, d) => d::Nil
case (xs, Nil, d) => add(xs, 0::Nil, d)
case (Nil, ys, d) => add(0::Nil, ys, d)
}
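For the question's example (a usage sketch; digits are stored least-significant first and the initial carry is 0):
add(5 :: 6 :: 3 :: Nil, 8 :: 4 :: 2 :: Nil, 0)  // List(3, 1, 6), i.e. 365 + 248 = 613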
I want to rewrite a recursive function using pattern matching instead of if-else statements, but I am getting (correct) warnings that some parts of the code are unreachable - and indeed the function evaluates to the wrong results.
The function I am trying to re-write is:
def pascal(c: Int, r: Int): Int =
if (c == 0)
1
else if (c == r)
1
else
pascal(c - 1, r - 1) + pascal(c, r - 1)
This function works as expected. I re-wrote it as follows using pattern matching but now the function is not working as expected:
def pascal2 (c : Int, r : Int) : Int = c match {
case 0 => 1
case r => 1
case _ => pascal2(c - 1, r - 1) + pascal2(c, r - 1)
}
Where am I going wrong?
Main:
println("Pascal's Triangle")
for (row <- 0 to 10) {
for (col <- 0 to row)
print(pascal(col, row) + " ")
println()
}
The following statement "shadows" the variable r:
case r =>
That is to say, the "r" in that case statement is not, in fact, the "r" you defined above. It is its own "r", which will always equal "c", because you are telling Scala to bind whatever value is being matched to a new variable named "r".
Hence, what you really want is:
def pascal2(c: Int, r: Int): Int = c match{
case 0 => 1
case _ if c == r => 1
case _ => pascal2(c-1, r-1) + pascal2(c, r-1)
}
This is not, however, tail recursive.
I fully agree with @wheaties and advise you to follow his directions. For the sake of completeness I want to point out a few alternatives.
Alternative 1
You could write your own unapply:
def pascal(c: Int, r: Int): Int = {
object MatchesBoundary {
def unapply(i: Int) = if (i==0 || i==r) Some(i) else None
}
c match {
case MatchesBoundary(_) => 1
case _ => pascal(c-1, r-1) + pascal(c, r-1)
}
}
I would not claim that it improves readability much in this case. I just want to show the possibility of combining semantically similar cases (with identical or similar case bodies), which may be useful in more complex examples.
Alternative 2
There is also another possible solution, which exploits the fact that Scala's pattern matching treats only lower-case identifiers as variables to bind. The following example shows what I mean:
def pascal(c: Int, r: Int): Int = {
val BoundaryL = 0
val BoundaryR = r
c match {
case BoundaryL => 1
case BoundaryR => 1
case _ => pascal(c-1, r-1) + pascal(c, r-1)
}
}
Since BoundaryL and BoundaryR start with upper-case letters they are not treated as variables to bind, but as values to match against. Therefore the above works (while changing them to boundaryL and boundaryR would not, which, by the way, also gives compiler warnings). This means that you could get your example to work simply by renaming the parameter r to R. Since this is a rather ugly solution I mention it only for educational purposes.
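For illustration, here is that remark in code: renaming the parameter from r to R makes the bare pattern a stable identifier that is compared against, rather than a fresh variable that shadows (a sketch for demonstration only):
def pascal2(c: Int, R: Int): Int = c match {
  case 0 => 1
  case R => 1   // upper-case R: compares c against the parameter's value
  case _ => pascal2(c - 1, R - 1) + pascal2(c, R - 1)
}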
I was wondering if there is some general method to convert a "normal" recursion with foo(...) + foo(...) as the last call to a tail-recursion.
For example (scala):
def pascal(c: Int, r: Int): Int = {
if (c == 0 || c == r) 1
else pascal(c - 1, r - 1) + pascal(c, r - 1)
}
A general approach in functional languages for converting a recursive function into a tail-call equivalent:
A simple way is to wrap the non-tail-recursive function in the Trampoline monad.
// assuming scalaz's Trampoline (scalaz.Free.Trampoline)
def pascalM(c: Int, r: Int): Trampoline[Int] = {
  if (c == 0 || c == r) Trampoline.done(1)
  else for {
    a <- Trampoline.suspend(pascalM(c - 1, r - 1))
    b <- Trampoline.suspend(pascalM(c, r - 1))
  } yield a + b
}
val pascal = pascalM(10, 5).run
So the pascal computation no longer recurses on the call stack. The Trampoline value is instead a nested structure describing the computation that needs to be done; run is a tail-recursive function that walks through this tree-like structure, interprets it, and at the base case returns the value.
A paper from Rúnar Bjarnason on the subject of trampolines: Stackless Scala With Free Monads
In cases where there is a simple modification to the value of a recursive call, that operation can be moved to the front of the recursive function. The classic example of this is Tail recursion modulo cons, where a simple recursive function in this form:
def recur[A](...):List[A] = {
...
x :: recur(...)
}
which is not tail recursive, is transformed into
def recur[A](...): List[A] = {
  def consRecur(..., consA: A): List[A] = {
    consA :: ...
    ...
    consRecur(..., ...)
  }
  ...
  consRecur(..., ...)
}
Alexlv's example is a variant of this.
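As a concrete (hypothetical) illustration of the idea, here is a non-tail-recursive function that conses after the recursive call, rewritten with an accumulator and a final reverse - the usual idiom for immutable lists in Scala, rather than the literal consA threading sketched above:
// Not tail recursive: the cons happens after the recursive call returns.
def doubleAll(xs: List[Int]): List[Int] = xs match {
  case Nil    => Nil
  case h :: t => (h * 2) :: doubleAll(t)
}
// Tail recursive: accumulate in reverse, then reverse once at the end.
def doubleAllTR(xs: List[Int]): List[Int] = {
  @annotation.tailrec
  def loop(rest: List[Int], acc: List[Int]): List[Int] = rest match {
    case Nil    => acc.reverse
    case h :: t => loop(t, (h * 2) :: acc)
  }
  loop(xs, Nil)
}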
This is such a well known situation that some compilers (I know of Prolog and Scheme examples but Scalac does not do this) can detect simple cases and perform this optimisation automatically.
Problems combining multiple calls to recursive functions have no such simple solution. TMRC optimisation is useless, as you are simply moving the first recursive call to another non-tail position. The only way to reach a tail-recursive solution is to remove all but one of the recursive calls; how to do this is entirely context dependent, but it requires finding an entirely different approach to solving the problem.
As it happens, in some ways your example is similar to the classic Fibonacci sequence problem; in that case the naive but elegant doubly-recursive solution can be replaced by one which loops forward from the 0th number.
def fib (n: Long): Long = n match {
case 0 | 1 => n
case _ => fib( n - 2) + fib( n - 1 )
}
def fib (n: Long): Long = {
def loop(current: Long, next: => Long, iteration: Long): Long = {
if (n == iteration)
current
else
loop(next, current + next, iteration + 1)
}
loop(0, 1, 0)
}
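A quick check of the looping version (usage sketch):
fib(10)  // 55
fib(50)  // 12586269025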
For the Fibonacci sequence, this is the most efficient approach (a stream-based solution is just a different expression of this solution that can cache results for subsequent calls). Now,
you can also solve your problem by looping forward from c0/r0 (well, c0/r2) and calculating each row in sequence - the difference being that you need to cache the entire previous row. So while this has a similarity to fib, it differs dramatically in the specifics and is also significantly more efficient than your original, doubly-recursive solution.
Here's an approach for your pascal triangle example which can calculate pascal(30,60) efficiently:
def pascal(column: Long, row: Long):Long = {
type Point = (Long, Long)
type Points = List[Point]
type Triangle = Map[Point,Long]
def above(p: Point) = (p._1, p._2 - 1)
def aboveLeft(p: Point) = (p._1 - 1, p._2 - 1)
def find(ps: Points, t: Triangle): Long = ps match {
// Found the ultimate goal
case (p :: Nil) if t contains p => t(p)
// Found an intermediate point: pop the stack and carry on
case (p :: rest) if t contains p => find(rest, t)
// Hit a triangle edge, add it to the triangle
case ((c, r) :: _) if (c == 0) || (c == r) => find(ps, t + ((c,r) -> 1))
// Triangle contains (c - 1, r - 1)...
case (p :: _) if t contains aboveLeft(p) => if (t contains above(p))
// And it contains (c, r - 1)! Add to the triangle
find(ps, t + (p -> (t(aboveLeft(p)) + t(above(p)))))
else
// Does not contain(c, r -1). So find that
find(above(p) :: ps, t)
// If we get here, we don't have (c - 1, r - 1). Find that.
case (p :: _) => find(aboveLeft(p) :: ps, t)
}
require(column >= 0 && row >= 0 && column <= row)
(column, row) match {
case (c, r) if (c == 0) || (c == r) => 1
case p => find(List(p), Map())
}
}
It's efficient, but I think it shows how ugly complex recursive solutions can become as you deform them to become tail recursive. At this point, it may be worth moving to a different model entirely. Continuations or monadic gymnastics might be better.
You want a generic way to transform your function. There isn't one. There are helpful approaches, that's all.
I don't know how theoretical this question is, but a recursive implementation won't be efficient even with tail-recursion. Try computing pascal(30, 60), for example. I don't think you'll get a stack overflow, but be prepared to take a long coffee break.
Instead, consider using a Stream or memoization:
val pascal: Stream[Stream[Long]] =
(Stream(1L)
#:: (Stream from 1 map { i =>
// compute row i
(1L
#:: (pascal(i-1) // take the previous row
sliding 2 // and add adjacent values pairwise
collect { case Stream(a,b) => a + b }).toStream
++ Stream(1L))
}))
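A usage sketch (row index first, then column; the pascal(30, 60) mentioned earlier corresponds to row 60, column 30 here):
pascal(4).toList  // List(1, 4, 6, 4, 1)
pascal(60)(30)    // column 30 of row 60, computed row by row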
The accumulator approach
def pascal(c: Int, r: Int): Int = {
def pascalAcc(acc:Int, leftover: List[(Int, Int)]):Int = {
if (leftover.isEmpty) acc
else {
val (c1, r1) = leftover.head
// Edge.
if (c1 == 0 || c1 == r1) pascalAcc(acc + 1, leftover.tail)
// Safe checks.
else if (c1 < 0 || r1 < 0 || c1 > r1) pascalAcc(acc, leftover.tail)
// Add 2 other points to accumulator.
else pascalAcc(acc, (c1 , r1 - 1) :: ((c1 - 1, r1 - 1) :: leftover.tail ))
}
}
pascalAcc(0, List ((c,r) ))
}
It does not overflow the stack, but for a big row and column, as Aaron mentioned, it's not fast.
Yes, it's possible. Usually it's done with the accumulator pattern: an internally defined helper function that takes an additional accumulator argument. As an example, let's count the length of a list.
The plain recursive version looks like this:
def length[A](xs: List[A]): Int = if (xs.isEmpty) 0 else 1 + length(xs.tail)
That's not tail recursive; in order to eliminate the final addition we have to accumulate the value somehow as we go, for example with the accumulator pattern:
def length[A](xs: List[A]) = {
def inner(ys: List[A], acc: Int): Int = {
if (ys.isEmpty) acc else inner(ys.tail, acc + 1)
}
inner(xs, 0)
}
A bit longer to code, but I think the idea is clear. Of course you can do it without an inner function, but then you have to supply the initial value of acc manually.
I'm pretty sure it's not possible in the simple way you're looking for in the general case, but it would depend on how elaborate you permit the changes to be.
A tail-recursive function must be re-writable as a while-loop, but try implementing, for example, a fractal tree using while-loops. It's possible, but you need to use an array or collection to store the state for each point, which substitutes for the data otherwise stored in the call stack.
It's also possible to use trampolining.
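For example, the standard library's scala.util.control.TailCalls provides the same trampolining idea without an external dependency; a minimal sketch:
import scala.util.control.TailCalls._

def pascalTC(c: Int, r: Int): TailRec[Int] =
  if (c == 0 || c == r) done(1)
  else for {
    a <- tailcall(pascalTC(c - 1, r - 1))
    b <- tailcall(pascalTC(c, r - 1))
  } yield a + b

pascalTC(5, 10).result  // 252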
It is indeed possible. The way I'd do this is to begin with List(1) and keep recursing till you get to the row you want.
Worth noticing that you can optimize it: if c==0 or c==r the value is one, and to calculate let's say column 3 of the 100th row you still only need to calculate the first three elements of the previous rows.
A working tail recursive solution would be this:
import scala.annotation.tailrec

def pascal(c: Int, r: Int): Int = {
  @tailrec
  def pascalAcc(c: Int, r: Int, acc: List[Int]): List[Int] = {
if (r == 0) acc
else pascalAcc(c, r - 1,
// from let's say 1 3 3 1 builds 0 1 3 3 1 0 , takes only the
// subset that matters (if asking for col c, no cols after c are
// used) and uses sliding to build (0 1) (1 3) (3 3) etc.
(0 +: acc :+ 0).take(c + 2)
.sliding(2, 1).map { x => x.reduce(_ + _) }.toList)
}
if (c == 0 || c == r) 1
else pascalAcc(c, r, List(1))(c)
}
The annotation @tailrec makes the compiler check that the function really is tail recursive.
It could probably be further optimized: since the rows are symmetric, if c > r/2 then pascal(c, r) == pascal(r - c, r)... but that is left to the reader ;)
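A quick sanity check of the function above (usage sketch):
pascal(2, 4)  // 6   (row 4 is 1 4 6 4 1)
pascal(3, 5)  // 10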
Given n (say 3 people) and s (say $100), we'd like to partition s among n people.
So we need all possible n-tuples that sum to s.
My Scala code below:
def weights(n:Int,s:Int):List[List[Int]] = {
List.concat( (0 to s).toList.map(List.fill(n)(_)).flatten, (0 to s).toList).
combinations(n).filter(_.sum==s).map(_.permutations.toList).toList.flatten
}
println(weights(3,100))
This works for small values of n. ( n=1, 2, 3 or 4).
Beyond n=4, it takes a very long time, practically unusable.
I'm looking for ways to rework my code using lazy evaluation/ Stream.
My requirements : Must work for n upto 10.
Warning : The problem gets really big really fast. My results from Matlab -
---For s =100, n = 1 thru 5 results are ---
n=1 :1 combinations
n=2 :101 combinations
n=3 :5151 combinations
n=4 :176851 combinations
n=5: 4598126 combinations
---
You need dynamic programming, or memoization. Same concept, anyway.
Let's say you have to divide s among n. Recursively, that's defined like this:
def permutations(s: Int, n: Int): List[List[Int]] = n match {
case 0 => Nil
case 1 => List(List(s))
case _ => (0 to s).toList flatMap (x => permutations(s - x, n - 1) map (x :: _))
}
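For a small input (usage sketch):
permutations(3, 2)  // List(List(0, 3), List(1, 2), List(2, 1), List(3, 0))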
Now, this will STILL be slow as hell, but there's a catch here... you don't need to recompute permutations(s, n) for numbers you have already computed. So you can do this instead:
val memoP = collection.mutable.Map.empty[(Int, Int), List[List[Int]]]
def permutations(s: Int, n: Int): List[List[Int]] = {
def permutationsWithHead(x: Int) = permutations(s - x, n - 1) map (x :: _)
n match {
case 0 => Nil
case 1 => List(List(s))
case _ =>
memoP getOrElseUpdate ((s, n),
(0 to s).toList flatMap permutationsWithHead)
}
}
And this can be even further improved, because it will compute every permutation. You only need to compute every combination, and then permute that without recomputing.
To compute every combination, we can change the code like this:
val memoC = collection.mutable.Map.empty[(Int, Int, Int), List[List[Int]]]
def combinations(s: Int, n: Int, min: Int = 0): List[List[Int]] = {
def combinationsWithHead(x: Int) = combinations(s - x, n - 1, x) map (x :: _)
n match {
case 0 => Nil
case 1 => List(List(s))
case _ =>
memoC getOrElseUpdate ((s, n, min),
(min to s / 2).toList flatMap combinationsWithHead)
}
}
Running combinations(100, 10) is still slow, given the sheer number of combinations alone. The permutations for each combination can be obtained by simply calling .permutations on the combination.
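For a small input (usage sketch):
combinations(4, 2)                           // List(List(0, 4), List(1, 3), List(2, 2))
combinations(4, 2).flatMap(_.permutations)   // the 5 ordered pairs summing to 4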
Here's a quick and dirty Stream solution:
def weights(n: Int, s: Int) = (1 until s).foldLeft(Stream(Nil: List[Int])) {
(a, _) => a.flatMap(c => Stream.range(0, n - c.sum + 1).map(_ :: c))
}.map(c => (n - c.sum) :: c)
It works for n = 6 in about 15 seconds on my machine:
scala> var x = 0
scala> weights(100, 6).foreach(_ => x += 1)
scala> x
res81: Int = 96560646
As a side note: by the time you get to n = 10, there are 4,263,421,511,271 of these things. That's going to take days just to stream through.
My solution to this problem; it can compute up to n = 6:
object Partition {
implicit def i2p(n: Int): Partition = new Partition(n)
def main(args : Array[String]) : Unit = {
for(n <- 1 to 6) println(100.partitions(n).size)
}
}
class Partition(n: Int){
def partitions(m: Int):Iterator[List[Int]] = new Iterator[List[Int]] {
val nums = Array.ofDim[Int](m)
nums(0) = n
var hasNext = m > 0 && n > 0
override def next: List[Int] = {
if(hasNext){
val result = nums.toList
var idx = 0
while(idx < m-1 && nums(idx) == 0) idx = idx + 1
if(idx == m-1) hasNext = false
else {
nums(idx+1) = nums(idx+1) + 1
nums(0) = nums(idx) - 1
if(idx != 0) nums(idx) = 0
}
result
}
else Iterator.empty.next
}
}
}
1
101
5151
176851
4598126
96560646
However, we can instead just compute the number of possible n-tuples:
val pt: (Int,Int) => BigInt = {
val buf = collection.mutable.Map[(Int,Int),BigInt]()
(s,n) => buf.getOrElseUpdate((s,n),
if(n == 0 && s > 0) BigInt(0)
else if(s == 0) BigInt(1)
else (0 to s).map{k => pt(s-k,n-1)}.sum
)
}
for(n <- 1 to 20) printf("%2d :%s%n",n,pt(100,n).toString)
1 :1
2 :101
3 :5151
4 :176851
5 :4598126
6 :96560646
7 :1705904746
8 :26075972546
9 :352025629371
10 :4263421511271
11 :46897636623981
12 :473239787751081
13 :4416904685676756
14 :38393094575497956
15 :312629484400483356
16 :2396826047070372396
17 :17376988841260199871
18 :119594570260437846171
19 :784008849485092547121
20 :4910371215196105953021
In a for-comprehension, I can't just put a print statement:
def prod (m: Int) = {
for (a <- 2 to m/(2*3);
print (a + " ");
b <- (a+1) to m/a;
c = (a*b)
if (c < m)) yield c
}
but I can circumvent it easily with a dummy assignment:
def prod (m: Int) = {
for (a <- 2 to m/(2*3);
dummy = print (a + " ");
b <- (a+1) to m/a;
c = (a*b)
if (c < m)) yield c
}
Being a side effect, and only used (so far) in code under development, is there a better ad hoc solution?
Is there a serious problem why I shouldn't use it, besides being a side effect?
update showing the real code, where adapting one solution is harder than expected:
From the discussion with Rex Kerr the necessity has arisen to show the original code, which is a bit more complicated (2 × .filter, calling a method at the end) but did not seem relevant for the question. However, when I tried to apply Rex's pattern to it I failed, so I post it here:
def prod (p: Array[Boolean], max: Int) = {
for (a <- (2 to max/(2*3)).
filter (p);
dummy = print (a + " ");
b <- (((a+1) to max/a).
filter (p));
if (a*b <= max))
yield (em (a, b, max)) }
Here is my attempt -- (b * a).filter is wrong, because the result is an int, not a filterable collection of ints:
// wrong:
def prod (p: Array[Boolean], max: Int) = {
(2 to max/(2*3)).filter (p).flatMap { a =>
print (a + " ")
((a+1) to max/a).filter (p). map { b =>
(b * a).filter (_ <= max).map (em (a, b, max))
}
}
}
Part II belongs in the comments, but can't be read well if written there - maybe I'll delete it in the end. Please excuse.
Ok - here is Rex's last answer in code layout:
def prod (p: Array[Boolean], max: Int) = {
  (2 to max/(2*3)).filter (p).flatMap { a =>
    print (a + " ")
    ((a+1) to max/a).filter (b => p (b) && b * a < max).map { b => m (a, b, max) }
  }
}
This is how you need to write it:
scala> def prod(m: Int) = {
| for {
| a <- 2 to m / (2 * 3)
| _ = print(a + " ")
| b <- (a + 1) to (m / a)
| c = a * b
| if c < m
| } yield c
| }
prod: (m: Int)scala.collection.immutable.IndexedSeq[Int]
scala> prod(20)
2 3 res159: scala.collection.immutable.IndexedSeq[Int] = Vector(6, 8, 10, 12, 14, 16, 18, 12, 15, 18)
Starting with Scala 2.13, the chaining operation tap has been included in the standard library, and can be used with minimal intrusiveness wherever we need to print some intermediate state of a pipeline:
import util.chaining._
def prod(m: Int) =
for {
a <- 2 to m / (2 * 3)
b <- (a + 1) to (m / a.tap(println)) // <- a.tap(println)
c = a * b
if c < m
} yield c
prod(20)
// 2
// 3
// res0: IndexedSeq[Int] = Vector(6, 8, 10, 12, 14, 16, 18, 12, 15, 18)
The tap chaining operation applies a side effect (in this case println) on a value (in this case a) while returning the value (a) untouched:
def tap[U](f: (A) => U): A
It's very convenient when debugging as you can use a bunch of taps without having to modify the code:
def prod(m: Int) =
for {
a <- (2 to m.tap(println) / (2 * 3)).tap(println)
b <- (a + 1) to (m / a.tap(println))
c = (a * b).tap(println)
if c < m
} yield c
I generally find that style of coding rather difficult to follow, since loops and intermediate results and such get all mixed in with each other. I would, instead of a for loop, write something like
def prod(m: Int) = {
(2 to m/(2*3)).flatMap { a =>
print(a + " ")
((a+1) to m/a).map(_ * a).filter(_ < m)
}
}
This also makes adding print statements and such easier.
It doesn't seem like good style to put a side-effecting statement within a for-comprehension (or indeed in the middle of any function), except for debugging, in which case it doesn't really matter what you call it ("debug" seems like a good name).
If you really need to, I think you'd be better separating your concerns somewhat by assigning an intermediate val, e.g. (your original laid out more nicely):
def prod (p: Array[Boolean], max: Int) = {
for {
a <- (2 to max / (2 * 3)) filter p
debug = print (a + " ")
b <- ((a + 1) to max / a) filter p
if a * b <= max
} yield em(a, b, max)
}
becomes
def prod2 (p: Array[Boolean], max: Int) = {
val as = (2 to max / (2 * 3)) filter p
for(a <- as) print(a + " ")
as flatMap {a =>
for {
b <- ((a + 1) to max / a) filter p
if a * b <= max
} yield em(a, b, max)
}
}