I have to write a method "all()" which returns a list of tuples; each tuple contains the row index, the column index and the set relevant to that cell, produced whenever the function meets a 0 in the grid. I have already written the "hyp" function, which returns the set part I need, e.g. Set(1,2). I am using a list of lists:
| 0 | 0 | 9 |
| 0 | x | 0 |
| 7 | 0 | 8 |
If Set(1,2) refers to the cell marked as x, all() should return (1, 1, Set(1,2)), where 1,1 are the row and column indices.
I wrote this method using zipWithIndex. Is there any simpler way to access an index in this case without zipWithIndex? Thanks in advance.
Code:
def all(): List[(Int, Int, Set[Int])] =
{
puzzle.list.zipWithIndex flatMap
{
rowAndIndex =>
rowAndIndex._1.zipWithIndex.withFilter(_._1 == 0) map
{
colAndIndex =>
(rowAndIndex._2, colAndIndex._2, hyp(rowAndIndex._2, colAndIndex._2))
}
}
}
The (_._1 == 0) is there because the function has to return the (Int, Int, Set[Int]) only when it finds a 0 in the grid.
It's fairly common to use zipWithIndex. You can minimise wrestling with tuples/pairs by pattern matching on the elements within the tuple:
def all(grid: List[List[Int]]): List[(Int, Int, Set[Int])] =
grid.zipWithIndex flatMap {case (row, r) =>
row.zipWithIndex.withFilter(_._1 == 0) map {case (col, c) => (r, c, hyp(r, c))}
}
This can be converted to a 100% equivalent for-comprehension:
def all(grid: List[List[Int]]): List[(Int, Int, Set[Int])] =
for {(row, r) <- grid.zipWithIndex;
(col, c) <- row.zipWithIndex if (col == 0)} yield (r, c, hyp(r, c))
Both of the above produce the same compiled code.
Note that your requirement means all solutions are at minimum O(n) = O(r*c) - you must visit each and every cell. However, the overall behaviour of user60561's answer is O(n^2) = O((r*c)^2): for each cell, there's an O(n) lookup in list(x)(y):
for{ x <- list.indices
y <- list(0).indices
if list(x)(y) == 0 } yield (x, y, hyp(x, y))
Here's similar (imperative!) logic, but O(n):
var r = -1
list flatMap { row =>
  r += 1
  var c = -1
  row flatMap { col =>
    c += 1
    if (col == 0) List((r, c, hyp(r, c))) else Nil
  }
}
Recursive version (uses results-accumulator to enable tail-recursion):
type Grid = List[List[Int]]
type GridHyp = List[(Int, Int, Set[Int])]
def all(grid: Grid): GridHyp = {
  @annotation.tailrec
  def rowHypIter(row: List[Int], r: Int, c: Int, accum: GridHyp): GridHyp = row match {
    case Nil => accum
    case col :: othCols =>
      rowHypIter(othCols, r, c + 1, if (col == 0) (r, c, hyp(r, c)) :: accum else accum)
  }
  @annotation.tailrec
  def gridHypIter(grid: Grid, r: Int, accum: GridHyp): GridHyp = grid match {
    case Nil => accum
    case row :: othRows => gridHypIter(othRows, r + 1, rowHypIter(row, r, 0, accum))
  }
  gridHypIter(grid, 0, Nil)
}
'Monadic' logic (flatMap/map/withFilter, or the equivalent for-comprehensions) is usually neater than recursion plus pattern matching, as is evident here.
The simplest way I can think of is just a classic for loop:
for{ x <- list.indices
y <- list(0).indices
if list(x)(y) == 0 } yield (x, y, hyp(x, y))
It assumes that your second dimension is of a uniform size. With this code, I would also recommend using an Array or Vector if your grid sizes are larger than 100 or so, because list(x)(y) is an O(n) operation.
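For instance, a minimal sketch of that suggestion (my addition, assuming the same hyp(r, c) helper from the question):
def all(list: List[List[Int]]): Seq[(Int, Int, Set[Int])] = {
  // Convert once up front so grid(x)(y) is effectively constant-time indexed access.
  val grid = list.map(_.toVector).toVector
  for {
    x <- grid.indices
    y <- grid(0).indices
    if grid(x)(y) == 0
  } yield (x, y, hyp(x, y))
}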
Related
I want to generate a list of Tuple2 objects. Each tuple (a,b) in the list should meet the conditions: a and b are both perfect squares, (b/30) < a < b,
and a > N and b > N (N can even be a BigInt).
I am trying to write a Scala function to generate the list of tuples meeting the above requirements.
This is my attempt. It works fine for Ints and Longs, but for BigInt I run into the problem that there is no sqrt. Here is my approach:
scala> def genTups(N:Long) ={
| val x = for(s<- 1L to Math.sqrt(N).toLong) yield s*s;
| val y = x.combinations(2).map{ case Vector(a,b) => (a,b)}.toList
| y.filter(t=> (t._1*30/t._2)>=1)
| }
genTups: (N: Long)List[(Long, Long)]
scala> genTups(30)
res32: List[(Long, Long)] = List((1,4), (1,9), (1,16), (1,25), (4,9), (4,16), (4,25), (9,16), (9,25), (16,25))
I improved this using a BigInt square-root algorithm, as below:
def genTups(N1: BigInt, N2: BigInt) = {
  // integer square root via binary search
  def sqt(n: BigInt): BigInt = {
    var a = BigInt(1)
    var b = (n >> 5) + BigInt(8)
    while ((b - a) >= 0) {
      val mid: BigInt = (a + b) >> 1
      if (mid * mid - n > 0) b = mid - 1
      else a = mid + 1
    }
    a - 1
  }
  val x = for (s <- sqt(N1) to sqt(N2)) yield s * s
  val y = x.combinations(2).map { case Vector(a, b) => (a, b) }.toList
  y.filter(t => (t._1 * 30 / t._2) >= 1)
}
I would appreciate any help to improve my algorithm.
You can avoid sqrt in your algorithm by changing the way you calculate x to this:
val x = (BigInt(1) to N).map(x => x*x).takeWhile(_ <= N)
The final function is then:
def genTups(N: BigInt) = {
val x = (BigInt(1) to N).map(x => x*x).takeWhile(_ <= N)
val y = x.combinations(2).map { case Vector(a, b) if (a < b) => (a, b) }.toList
y.filter(t => (t._1 * 30 / t._2) >= 1)
}
You can also re-write this as a single chain of operations like this:
def genTups(N: BigInt) =
(BigInt(1) to N)
.map(x => x * x)
.takeWhile(_ <= N)
.combinations(2)
.map { case Vector(a, b) if a < b => (a, b) }
.filter(t => (t._1 * 30 / t._2) >= 1)
.toList
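A quick sanity check of the chained version against the output shown in the question (assuming the definition above is in scope):
genTups(BigInt(30))
// List((1,4), (1,9), (1,16), (1,25), (4,9), (4,16), (4,25), (9,16), (9,25), (16,25))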
In a quest for performance, I came up with this recursive version, which appears to be significantly faster:
def genTups(N1: BigInt, N2: BigInt) = {
def sqt(n: BigInt): BigInt = {
var a = BigInt(1)
var b = (n >> 5) + BigInt(8)
while ((b - a) >= 0) {
var mid: BigInt = (a + b) >> 1
if (mid * mid - n > 0) {
b = mid - 1
} else {
a = mid + 1
}
}
a - 1
}
  @annotation.tailrec
def loop(a: BigInt, rem: List[BigInt], res: List[(BigInt, BigInt)]): List[(BigInt, BigInt)] =
rem match {
case Nil => res
case head :: tail =>
val a30 = a * 30
val thisRes = rem.takeWhile(_ <= a30).map(b => (a, b))
loop(head, tail, thisRes.reverse ::: res)
}
val squares = (sqt(N1) to sqt(N2)).map(s => s * s).toList
loop(squares.head, squares.tail, Nil).reverse
}
Each recursion of the loop adds all the matching pairs for a given value of a. The result is built in reverse because adding to the front of a long list is much faster than adding to the tail.
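As a small illustration of that last point (my addition, not part of the answer's algorithm):
val xs = List(1, 2, 3)
val fast = 0 :: xs   // prepend: O(1), shares the existing list
val slow = xs :+ 4   // append: O(n), copies the whole list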
First, create a function to check whether a number is a perfect square or not:
def squareRootOfPerfectSquare(a: Int): Option[Int] = {
val sqrt = math.sqrt(a)
if (sqrt % 1 == 0)
Some(sqrt.toInt)
else
None
}
Then, create another function that will calculate the list of tuples according to the conditions mentioned above:
def generateTuples(n1:Int,n2:Int)={
for{
b <- 1 to n2;
a <- 1 to n1 if(b>a && squareRootOfPerfectSquare(b).isDefined && squareRootOfPerfectSquare(a).isDefined)
} yield ( (a,b) )
}
Then, calling the function as generateTuples(5,10), you will get the output:
res0: scala.collection.immutable.IndexedSeq[(Int, Int)] = Vector((1,4), (1,9), (4,9))
Hope that helps !!!
I am new to Scala and was trying the Selection Sort algorithm. I managed to do a min sort, but when I try to do the max sort I get a sorted array in decreasing order. My code is:
def maxSort(a:Array[Double]):Unit = {
for(i <- 0 until a.length-1){
var min = i
for(j <- i + 1 until a.length){
if (a(j) < a(min)) min = j
}
val tmp = a(i)
a(i) = a(min)
a(min) = tmp
}
}
I know that I have to append my result at the end of the array, but how do I do that?
This code will sort the array using the maximum in increasing order:
def maxSort(a:Array[Double]):Unit = {
for (i <- (0 until a.length).reverse) {
var max = i
for (j <- (0 until i).reverse) {
if (a(j) > a(max)) max = j
}
val tmp = a(i)
a(i) = a(max)
a(max) = tmp
}
}
The main issue here is iterating through the array in reverse order; more solutions are provided here:
Scala downwards or decreasing for loop?
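For reference, one of the usual forms from that question (my addition), applied to the outer loop above:
for (i <- a.length - 1 to 0 by -1) {
  // ... find the maximum in a(0..i) and swap it into position i, as above ...
}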
Please note that Scala is praised for its functional features, and a functional approach might be more interesting and "in the style of the language". Here are some examples of Selection Sort:
Selection sort in functional Scala
Selection Sort in functional style:
def selectionSort(source: List[Int]) = {
def select(source: List[Int], result: List[Int]) : List[Int] = source match {
case h :: t => sort(t, Nil, result, h)
case Nil => result
}
  @annotation.tailrec
def sort(source: List[Int], r1: List[Int], r2: List[Int], m: Int) : List[Int] = source match {
case h :: t => if( h > m) sort(t, h :: r1, r2, m) else sort(t, m :: r1, r2, h)
case Nil => select(r1, r2 :+ m)
}
select(source, Nil)
}
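A quick usage check (my addition, assuming the function above is in scope):
selectionSort(List(3, 1, 2))     // List(1, 2, 3)
selectionSort(List(5, 2, 5, 1))  // List(1, 2, 5, 5)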
In Scala, I want to write a function that yields the odd numbers within a given range. The function should print a log message when it iterates over an even number. The first version of the function is:
import scala.collection.mutable

def getOdds(N: Int): Traversable[Int] = {
val list = new mutable.MutableList[Int]
for (n <- 0 until N) {
if (n % 2 == 1) {
list += n
} else {
println("skip even number " + n)
}
}
return list
}
If I omit the logging, the implementation becomes very simple:
def getOddsWithoutPrint(N: Int) =
for (n <- 0 until N if (n % 2 == 1)) yield n
However, I don't want to lose the logging part. How do I rewrite the first version more compactly? It would be great if it could be rewritten similar to this:
def IWantToDoSomethingSimilar(N: Int) =
for (n <- 0 until N) if (n % 2 == 1) yield n else println("skip even number " + n)
def IWantToDoSomethingSimilar(N: Int) =
for {
n <- 0 until N
if n % 2 != 0 || { println("skip even number " + n); false }
} yield n
Using filter instead of a for expression would be slightly simpler though.
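A sketch of that filter variant (my addition; the log becomes a side effect inside the predicate):
def getOdds(N: Int): Seq[Int] =
  (0 until N).filter { n =>
    if (n % 2 == 0) println("skip even number " + n)
    n % 2 == 1
  }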
If you want to keep the sequential nature of the processing (handling odds and evens in order, not separately), you can use something like this (edited):
def IWantToDoSomethingSimilar(N: Int) =
(for (n <- (0 until N)) yield {
if (n % 2 == 1) {
Option(n)
} else {
println("skip even number " + n)
None
}
// Flatten transforms the Seq[Option[Int]] into Seq[Int]
}).flatten
EDIT: following the same concept, a shorter solution:
def IWantToDoSomethingSimilar(N: Int) =
(0 until N) map {
case n if n % 2 == 0 => println("skip even number "+ n)
case n => n
} collect {case i:Int => i}
If you want to dig into a functional approach, something like the following is a good place to start.
First some common definitions:
// use scalaz 7
import scalaz._, Scalaz._
// transforms a function returning either E or B into a
// function returning an optional B and optionally writing a log of type E
def logged[A, E, B, F[_]](f: A => E \/ B)(
implicit FM: Monoid[F[E]], FP: Pointed[F]): (A => Writer[F[E], Option[B]]) =
(a: A) => f(a).fold(
e => Writer(FP.point(e), None),
b => Writer(FM.zero, Some(b)))
// helper for fixing the log storage format to List
def listLogged[A, E, B](f: A => E \/ B) = logged[A, E, B, List](f)
// shorthand for a String logger with List storage
type W[+A] = Writer[List[String], A]
Now all you have to do is write your filtering function:
def keepOdd(n: Int): String \/ Int =
if (n % 2 == 1) \/.right(n) else \/.left(n + " was even")
You can try it instantly:
scala> List(5, 6) map(keepOdd)
res0: List[scalaz.\/[String,Int]] = List(\/-(5), -\/(6 was even))
Then you can use the traverse function to apply your function to a list of inputs, and collect both the logs written and the results:
scala> val x = List(5, 6).traverse[W, Option[Int]](listLogged(keepOdd))
x: W[List[Option[Int]]] = scalaz.WriterTFunctions$$anon$26#503d0400
// unwrap the results
scala> x.run
res11: (List[String], List[Option[Int]]) = (List(6 was even),List(Some(5), None))
// we may even drop the None-s from the output
scala> val (logs, results) = x.map(_.flatten).run
logs: List[String] = List(6 was even)
results: List[Int] = List(5)
I don't think this can be done easily with a for comprehension. But you could use partition.
def getOdds(N: Int) = {
val (evens, odds) = 0 until N partition { x => x % 2 == 0 }
evens foreach { x => println("skipping " + x) }
odds
}
EDIT: To avoid printing the log messages after the partitioning is done, you can change the first line of the method like this:
val (evens, odds) = (0 until N).view.partition { x => x % 2 == 0 }
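For completeness, a sketch of the whole method with that lazy variant (my addition; forcing odds at the end so callers get a plain list rather than a view):
def getOdds(N: Int): List[Int] = {
  val (evens, odds) = (0 until N).view.partition { x => x % 2 == 0 }
  evens foreach { x => println("skipping " + x) }
  odds.toList
}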
Given n (say 3 people) and s (say $100), we'd like to partition s among the n people.
So we need all possible n-tuples that sum to s.
My Scala code is below:
def weights(n:Int,s:Int):List[List[Int]] = {
List.concat( (0 to s).toList.map(List.fill(n)(_)).flatten, (0 to s).toList).
combinations(n).filter(_.sum==s).map(_.permutations.toList).toList.flatten
}
println(weights(3,100))
This works for small values of n (n = 1, 2, 3 or 4).
Beyond n = 4 it takes a very long time and is practically unusable.
I'm looking for ways to rework my code using lazy evaluation / Stream.
My requirement: it must work for n up to 10.
Warning: the problem gets really big really fast. My results from MATLAB, for s = 100 and n = 1 through 5:
n=1: 1 combination
n=2: 101 combinations
n=3: 5151 combinations
n=4: 176851 combinations
n=5: 4598126 combinations
You need dynamic programming, or memoization. Same concept, anyway.
Let's say you have to divide s among n. Recursively, that's defined like this:
def permutations(s: Int, n: Int): List[List[Int]] = n match {
case 0 => Nil
case 1 => List(List(s))
case _ => (0 to s).toList flatMap (x => permutations(s - x, n - 1) map (x :: _))
}
Now, this will STILL be slow as hell, but there's a catch here... you don't need to recompute permutations(s, n) for numbers you have already computed. So you can do this instead:
val memoP = collection.mutable.Map.empty[(Int, Int), List[List[Int]]]
def permutations(s: Int, n: Int): List[List[Int]] = {
def permutationsWithHead(x: Int) = permutations(s - x, n - 1) map (x :: _)
n match {
case 0 => Nil
case 1 => List(List(s))
case _ =>
memoP getOrElseUpdate ((s, n),
(0 to s).toList flatMap permutationsWithHead)
}
}
And this can be improved even further, because it still computes every permutation. You only need to compute every combination, and then permute each one without recomputing.
To compute every combination, we can change the code like this:
val memoC = collection.mutable.Map.empty[(Int, Int, Int), List[List[Int]]]
def combinations(s: Int, n: Int, min: Int = 0): List[List[Int]] = {
def combinationsWithHead(x: Int) = combinations(s - x, n - 1, x) map (x :: _)
n match {
case 0 => Nil
case 1 => List(List(s))
case _ =>
memoC getOrElseUpdate ((s, n, min),
(min to s / 2).toList flatMap combinationsWithHead)
}
}
Running combinations(100, 10) is still slow, given the sheer number of combinations alone. The permutations for each combination can be obtained by simply calling .permutations on the combination.
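For example (my illustration of that last step, using the combinations function above):
combinations(5, 2).flatMap(_.permutations)
// List(List(0, 5), List(5, 0), List(1, 4), List(4, 1), List(2, 3), List(3, 2))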
Here's a quick and dirty Stream solution:
def weights(s: Int, n: Int) = (1 until n).foldLeft(Stream(Nil: List[Int])) {
  (a, _) => a.flatMap(c => Stream.range(0, s - c.sum + 1).map(_ :: c))
}.map(c => (s - c.sum) :: c)
It works for n = 6 in about 15 seconds on my machine:
scala> var x = 0
scala> weights(100, 6).foreach(_ => x += 1)
scala> x
res81: Int = 96560646
As a side note: by the time you get to n = 10, there are 4,263,421,511,271 of these things. That's going to take days just to stream through.
My solution to this problem can compute up to n = 6:
object Partition {
implicit def i2p(n: Int): Partition = new Partition(n)
def main(args : Array[String]) : Unit = {
for(n <- 1 to 6) println(100.partitions(n).size)
}
}
class Partition(n: Int){
def partitions(m: Int):Iterator[List[Int]] = new Iterator[List[Int]] {
val nums = Array.ofDim[Int](m)
nums(0) = n
var hasNext = m > 0 && n > 0
override def next: List[Int] = {
if(hasNext){
val result = nums.toList
var idx = 0
while(idx < m-1 && nums(idx) == 0) idx = idx + 1
if(idx == m-1) hasNext = false
else {
nums(idx+1) = nums(idx+1) + 1
nums(0) = nums(idx) - 1
if(idx != 0) nums(idx) = 0
}
result
}
else Iterator.empty.next
}
}
}
1
101
5151
176851
4598126
96560646
However, we can also just compute the number of possible n-tuples:
val pt: (Int,Int) => BigInt = {
val buf = collection.mutable.Map[(Int,Int),BigInt]()
(s,n) => buf.getOrElseUpdate((s,n),
if(n == 0 && s > 0) BigInt(0)
else if(s == 0) BigInt(1)
else (0 to s).map{k => pt(s-k,n-1)}.sum
)
}
for(n <- 1 to 20) printf("%2d :%s%n",n,pt(100,n).toString)
1 :1
2 :101
3 :5151
4 :176851
5 :4598126
6 :96560646
7 :1705904746
8 :26075972546
9 :352025629371
10 :4263421511271
11 :46897636623981
12 :473239787751081
13 :4416904685676756
14 :38393094575497956
15 :312629484400483356
16 :2396826047070372396
17 :17376988841260199871
18 :119594570260437846171
19 :784008849485092547121
20 :4910371215196105953021
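As a cross-check (my addition, not from the answer above): the number of non-negative n-tuples summing to s is the stars-and-bars binomial C(s + n - 1, n - 1), which a one-liner reproduces:
def tupleCount(s: Int, n: Int): BigInt =
  (BigInt(s + 1) to BigInt(s + n - 1)).product / (BigInt(1) to BigInt(n - 1)).product

tupleCount(100, 6)    // 96560646
tupleCount(100, 10)   // 4263421511271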
I read in Programming in Scala section 23.5 that map, flatMap and filter operations can always be converted into for-comprehensions and vice-versa.
We're given the following equivalence:
def map[A, B](xs: List[A], f: A => B): List[B] =
for (x <- xs) yield f(x)
I have a value calculated from a series of map operations:
val r = (1 to 100).map{ i => (1 to 100).map{i % _ == 0} }
.map{ _.foldLeft(false)(_^_) }
.map{ case true => "open"; case _ => "closed" }
I'm wondering what this would look like as a for-comprehension. How do I translate it?
(If it's helpful, in words this is:
take integers from 1 to 100
for each, create a list of 100 boolean values
fold each list with an XOR operator, back into a boolean
yield a list of 100 Strings "open" or "closed" depending on the boolean
I imagine there is a standard way to translate map operations, and the details of the actual functions in them are not important. I could be wrong though.)
Is this the kind of translation you're looking for?
for (i <- 1 to 100;
     x = (1 to 100).map(i % _ == 0);
     y = x.foldLeft(false)(_ ^ _);
     z = y match { case true => "open"; case _ => "closed" })
yield z
If desired, the map in the definition of x could also be translated to an "inner" for-comprehension.
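For instance, that inner translation could look like this (my sketch, same result as the version above):
for (i <- 1 to 100;
     x = (for (d <- 1 to 100) yield i % d == 0);
     y = x.foldLeft(false)(_ ^ _);
     z = y match { case true => "open"; case _ => "closed" })
yield z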
In retrospect, a series of chained map calls is sort of trivial, in that you could equivalently call map once with composed functions:
s.map(f).map(g).map(h) == s.map(f andThen g andThen h)
I find for-comprehensions to be a bigger win when flatMap and filter are involved. Consider
for (i <- 1 to 3;
j <- 1 to 3 if (i + j) % 2 == 0;
k <- 1 to 3) yield i ^ j ^ k
versus
(1 to 3).flatMap { i =>
(1 to 3).filter(j => (i + j) % 2 == 0).flatMap { j =>
(1 to 3).map { k => i ^ j ^ k }
}
}