Efficient way to read binary files in Scala

I'm trying to read a binary file (16 MB) that contains only integers coded on 16 bits. To do that, I read it in chunks of 1 MB, which gives me an array of bytes. For my own needs, I convert this byte array to a short array with the convert function below, but reading the file with a buffer and converting it into a short array takes me 5 seconds. Is there a faster way than my solution?
def convert(in: Array[Byte]): Array[Short] = in.grouped(2).map {
  case Array(one)    => (one << 8 | 0.toByte).toShort
  case Array(hi, lo) => (hi << 8 | lo).toShort
}.toArray
import java.io.RandomAccessFile

val startTime = System.nanoTime()
val file = new RandomAccessFile("foo", "r")
val defaultBlockSize = 1 * 1024 * 1024
val byteBuffer = new Array[Byte](defaultBlockSize)
val chunkNums = (file.length / defaultBlockSize).toInt
for (i <- 1 to chunkNums) {
  val seek = (i - 1) * defaultBlockSize
  file.seek(seek)
  file.read(byteBuffer)
  val s = convert(byteBuffer)
  println(byteBuffer.length)
}
val stopTime = System.nanoTime()
val duration = (stopTime - startTime) / 1000000000.0
println("Read and convert took " + duration + " s")

16 MB easily fits in memory unless you're running this on a feature phone or something. No need to chunk it and make the logic harder.
Just gulp the whole file at once with java.nio.file.Files.readAllBytes:
val buffer = java.nio.file.Files.readAllBytes(myfile.toPath)
assuming you are not stuck with Java 1.6. (If you are stuck with Java 1.6, pre-allocate your buffer using myfile.length, and use read on a FileInputStream to get it all in one go. It's not much harder; just don't forget to close it!)
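A minimal sketch of that Java 1.6 route, assuming a plain java.io.File (production code should loop, since read may return fewer bytes than requested):
import java.io.{File, FileInputStream}

val myfile = new File("foo")
val buffer = new Array[Byte](myfile.length.toInt)
val in = new FileInputStream(myfile)
try in.read(buffer) finally in.close()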
Then if you don't want to convert it yourself, you can
val bb = java.nio.ByteBuffer.wrap(buffer)
bb.order(java.nio.ByteOrder.nativeOrder)
val shorts = new Array[Short](buffer.length/2)
bb.asShortBuffer.get(shorts)
And you're done.
Note that this is all Java stuff; there's nothing Scala-specific here save the syntax.
If you're wondering why this is so much faster than your code, it's because grouped(2) boxes the bytes and places them in an array. That's three allocations for every short you want! You can do it yourself by indexing the array directly, and that will be fast, but why would you want to when ByteBuffer and friends do exactly what you need already?
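For completeness, a sketch of what that direct-indexing conversion could look like (big-endian, matching convert above, with no boxing):
def convertFast(in: Array[Byte]): Array[Short] = {
  val out = new Array[Short](in.length / 2)
  var i = 0
  while (i < out.length) {
    // high byte shifted left, low byte masked so its sign bit doesn't leak in
    out(i) = ((in(2 * i) << 8) | (in(2 * i + 1) & 0xFF)).toShort
    i += 1
  }
  out
}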
If you really, really care about that last (odd) byte, you can use (buffer.length + 1)/2 for the size of shorts, and tack on if ((buffer.length & 1) == 1) shorts(shorts.length - 1) = ((bb.get & 0xFF) << 8).toShort to grab the last byte.

A couple of issues pop out:
If byteBuffer is always going to be 1024*1024 bytes in size, then the case Array(one) in convert will never actually be used, so the pattern match is unnecessary.
Also, you can avoid the for loop with a tail-recursive function. After the val byteBuffer = ... line you can replace the chunkNums and for loop with:
@scala.annotation.tailrec
def readAndConvert(b: List[Array[Short]], file: RandomAccessFile): List[Array[Short]] = {
  if (file.read(byteBuffer) < 0)
    b
  else
    // read already advances the file pointer, so there is no need to seek or skip here
    readAndConvert(b.+:(convert(byteBuffer)), file)
}
val sValues = readAndConvert(List.empty[Array[Short]], file)
Note: because prepending to a list is much faster than appending, the above loop gives you the converted values in reverse order relative to the reading order in the file.
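If the original file order matters, a quick way to restore it (a sketch, assuming you also want a single flat array at the end) is:
val inFileOrder: Array[Short] = sValues.reverse.flatten.toArray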

Related

How to generate a random sequence of binary strings of fixed size (say 36 bits) in Scala

I'm trying to generate a unique random sequence of 50 binary strings of 36 bits each. I tried nextInt followed by toBinaryString, which didn't solve my problem since nextInt doesn't support such big numbers, and I also checked nextString, which generates a string of random characters (not 0/1). Is there any other way to achieve this?
And to add one more requirement: I want all 36 bits to be present every time. For example, if the random generator produced 3, I want the output to be 34 zeros followed by 11.
I'm quite new to Scala; pardon me if my question seems irrelevant or redundant.
You can try
val r = scala.util.Random
val a: Seq[Int] = (0 until 50).map(_ => r.nextInt(1000000))
val y = a.map { x =>
  val bin = x.toBinaryString
  val zero = 36 - bin.length
  List.fill(zero)(0).mkString("") ++ bin
}
println(r.shuffle(y))
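Note that nextInt(1000000) only covers numbers below one million (about 20 bits) and does not guarantee distinct values, so if you really need 50 unique 36-bit strings, a sketch along these lines (not part of the answer above) may be closer to the requirement:
import scala.util.Random

val rnd = new Random
// collect 50 distinct 36-bit values (insertion order preserved)
val values = scala.collection.mutable.LinkedHashSet.empty[Long]
while (values.size < 50)
  values += (rnd.nextLong() & ((1L << 36) - 1)) // mask down to the low 36 bits
// left-pad each binary string to exactly 36 characters, so 3 becomes 34 zeros followed by "11"
val strings = values.toList.map { v =>
  val bin = v.toBinaryString
  "0" * (36 - bin.length) + bin
}
strings.foreach(println)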

Expensive flatMap() operation on streams originating from Stream.emits()

I just encountered an issue with degrading fs2 performance when using a stream of strings to be written to a file via text.utf8encode. I tried to change my source to use chunked strings to increase performance, but I observed performance degradation instead.
As far as I can see, it boils down to the following: Invoking flatMap on a stream that originates from Stream.emits() can be very expensive. Time usage seems to be exponential based on the size of the sequence passed to Stream.emits(). The code snippet below shows an example:
/*
Test done with scala 2.11.11 and fs2 version 0.10.0-M7.
*/
val rangeSize = 20000
val integers = (1 to rangeSize).toVector
// Note that the last flatMaps are just added to show extreme load for streamA.
val streamA = Stream.emits(integers).flatMap(Stream.emit(_))
val streamB = Stream.range(1, rangeSize + 1).flatMap(Stream.emit(_))
streamA.toVector // Uses approx. 25 seconds (!)
streamB.toVector // Uses approx. 15 milliseconds
Is this a bug, or should usage of Stream.emits() for large sequences be avoided?
TLDR: Allocations.
Longer answer:
Interesting question. I ran a JFR profile on both methods separately and looked at the results. The first thing that immediately caught my eye was the number of allocations. (The JFR allocation profiles for Stream.emits and Stream.range were attached as screenshots and are not reproduced here.)
We can see that Stream.emits allocates a significant number of Append instances, which are the concrete implementation of Catenable[A], the type Stream.emits uses to fold the sequence:
private[fs2] final case class Append[A](left: Catenable[A], right: Catenable[A]) extends Catenable[A]
This comes from the way Catenable[A] builds up the result via foldLeft:
foldLeft(empty: Catenable[B])((acc, a) => acc :+ f(a))
where :+ allocates a new Append object for each element. That means we're generating at least 20000 such Append objects.
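To see why that fold is expensive, here is a simplified stand-in for Catenable (not fs2's actual implementation, just an illustration of the allocation pattern):
sealed trait Cat[+A]
case object Empty extends Cat[Nothing]
final case class Single[A](a: A) extends Cat[A]
final case class Append[A](left: Cat[A], right: Cat[A]) extends Cat[A]

// appending one element allocates one Single and one Append node
def snoc[A](c: Cat[A], a: A): Cat[A] = Append(c, Single(a))

// folding a 20000-element sequence this way therefore allocates ~20000 Append nodes
val built = (1 to 20000).foldLeft(Empty: Cat[Int])((acc, a) => snoc(acc, a))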
There is also a hint in the documentation of Stream.range: it produces its elements lazily rather than in one chunk (which is what emits does), and producing the whole sequence as a single chunk can be a problem when the range being generated is large:
/**
  * Lazily produce the range `[start, stopExclusive)`. If you want to produce
  * the sequence in one chunk, instead of lazily, use
  * `emits(start until stopExclusive)`.
  *
  * @example {{{
  * scala> Stream.range(10, 20, 2).toList
  * res0: List[Int] = List(10, 12, 14, 16, 18)
  * }}}
  */
def range(start: Int, stopExclusive: Int, by: Int = 1): Stream[Pure, Int] =
  unfold(start) { i =>
    if ((by > 0 && i < stopExclusive && start < stopExclusive) ||
        (by < 0 && i > stopExclusive && start > stopExclusive))
      Some((i, i + by))
    else None
  }
You can see that there is no additional wrapping here, only the integers that get emitted as part of the range. On the other hand, Stream.emits creates an Append object for every element in the sequence, where left contains the elements accumulated so far and right contains the current value.
Is this a bug? I would say no, but I would definitely raise it as a performance issue with the fs2 library maintainers.

Scala Range.Double missing last element

I am trying to create a list of numBins numbers evenly spaced in the range [lower,upper). Of course, there are floating-point issues and this approach is not the best. The result of using Range.Double, however, surprises me, as the missing element is not close to the upper bound at all.
Setup:
val lower = -1d
val upper = 1d
val numBins = 11
val step = (upper-lower)/numBins // step = 0.18181818181818182
Problem:
scala> Range.Double(lower, upper, step)
res0: scala.collection.immutable.NumericRange[Double] = NumericRange(-1.0, -0.8181818181818182, -0.6363636363636364, -0.45454545454545453, -0.2727272727272727, -0.0909090909090909, 0.09090909090909093, 0.27272727272727276, 0.4545454545454546, 0.6363636363636364)
Issue: The list seems to be one element short. 0.8181818181818183 is one step further, and is less than 1.
Workaround:
scala> for (bin <- 0 until numBins) yield lower + bin * step
res1: scala.collection.immutable.IndexedSeq[Double] = Vector(-1.0, -0.8181818181818181, -0.6363636363636364, -0.4545454545454546, -0.2727272727272727, -0.09090909090909083, 0.09090909090909083, 0.2727272727272727, 0.4545454545454546, 0.6363636363636365, 0.8181818181818183)
This result now contains the expected number of elements, including 0.818181..
I think the root cause of your problem lies in some peculiarities of the toString implementation for NumericRange:
override def toString() = {
  val endStr = if (length > Range.MAX_PRINT) ", ... )" else ")"
  take(Range.MAX_PRINT).mkString("NumericRange(", ", ", endStr)
}
UPD: It's not just about toString. Some other methods, like map and foreach, also drop the last elements from the returned collection.
Anyway, if you check the size of the collection you've got, you'll find out that all the elements are there.
What you've done in your workaround example is use a different underlying data type.
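A quick way to check that claim (a sketch for the Scala versions used in the question, where Range.Double still exists):
val lower = -1d
val upper = 1d
val numBins = 11
val step = (upper - lower) / numBins
val r = Range.Double(lower, upper, step)
println(r.length)        // per the answer, this reports 11 even though toString shows only 10 elements
println(r(r.length - 1)) // per the answer, the "missing" 0.81818... is still in the collection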

Different result returned using Scala Collection par in a series of runs

I have tasks that I want to execute concurrently, and each task takes a substantial amount of memory, so I have to execute them in batches of 2 to conserve memory.
def runme(n: Int = 120) = (1 to n).grouped(2).toList.flatMap { tuple =>
  tuple.par.map { x =>
    println(s"Running $x")
    val s = (1 to 100000).toList // intentionally make the JVM allocate a sizeable chunk of memory
    s.sum.toLong
  }
}
val result = runme()
println(result.size + " => " + result.sum)
The result I expected from the output was 120 => 84609924480, but the output was rather random. The size of the returned collection differed from execution to execution; most of the time some results were missing, even though all the tasks appeared to execute, judging from the console. I thought flatMap waits for the parallel executions in map to complete before returning. What should I do to always get the right result using par? Thanks
Just for the record: changing the underlying collection in this case shouldn't change the output of your program. The problem is related to this known bug. It's fixed from 2.11.6, so if you use that (or higher) Scala version, you should not see the strange behavior.
And about the overflow, I still think that your expected value is wrong. You can see that the sum is overflowing because the list contains integers (which are 32-bit) while the total sum exceeds the Int range. You can check it with the following snippet:
val n = 100000
val s = (1 to n).toList // your original code
val yourValue = s.sum.toLong // your original code
val correctValue = 1l * n * (n + 1) / 2 // use math formula
var bruteForceValue = 0l // in case you don't trust math :) It's Long because of 0l
for (i ← 1 to n) bruteForceValue += i // iterate through range
println(s"yourValue = $yourValue")
println(s"correctvalue = $correctValue")
println(s"bruteForceValue = $bruteForceValue")
which produces the output
yourValue = 705082704
correctvalue = 5000050000
bruteForceValue = 5000050000
Cheers!
Thanks @kaktusito.
It worked after I changed the grouped list to a Vector or Seq, i.e. from (1 to n).grouped(2).toList.flatMap{... to (1 to n).grouped(2).toVector.flatMap{...
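For what it's worth, a sketch (not from the thread) that combines that change with a Long accumulator, so the per-task sums no longer overflow:
def runmeFixed(n: Int = 120): Vector[Long] =
  (1 to n).grouped(2).toVector.flatMap { pair =>
    pair.par.map { x =>
      val s = (1 to 100000).toList
      s.foldLeft(0L)(_ + _) // accumulate in a Long: 5000050000 per task instead of the overflowed Int
    }.seq
  }

val fixed = runmeFixed()
println(fixed.size + " => " + fixed.sum) // 120 => 600006000000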

Adding immutable Vectors

I am trying to work more with Scala's immutable collections since they are easy to parallelize, but I struggle with some newbie problems. I am looking for a way to (efficiently) create a new Vector from an operation. To be precise, I want something like
val v : Vector[Double] = RandomVector(10000)
val w : Vector[Double] = RandomVector(10000)
val r = v + w
I tested the following:
// 1)
val r: Vector[Double] = (v.zip(w)).map { t: (Double, Double) => t._1 + t._2 }

// 2)
import scala.collection.immutable.VectorBuilder
val vb = new VectorBuilder[Double]()
var i = 0
while (i < v.length) {
  vb += v(i) + w(i)
  i = i + 1
}
val r = vb.result
Both take really long compared to the work with Array:
[Vector Zip/Map ] Elapsed time 0.409 msecs
[Vector While Loop] Elapsed time 0.374 msecs
[Array While Loop ] Elapsed time 0.056 msecs
// with warm-up (10000) and avg. over 10000 runs
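For reference, the Array timing presumably comes from a plain while loop along these lines (a sketch; the actual benchmark code isn't shown, and the inputs here merely stand in for RandomVector):
val va: Array[Double] = Array.fill(10000)(math.random)
val wa: Array[Double] = Array.fill(10000)(math.random)
val ra = new Array[Double](va.length)
var j = 0
while (j < va.length) {
  ra(j) = va(j) + wa(j)
  j += 1
}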
Is there a better way to do it? I think the zip/map/reduce approach has the advantage that it can run in parallel as soon as the collections support it.
Thanks
Vector is not specialized for Double, so you're going to pay a sizable performance penalty for using it. If you are doing a simple operation, you're probably better off using an array on a single core than a Vector or other generic collection on the entire machine (unless you have 12+ cores). If you still need parallelization, there are other mechanisms you can use, such as scala.actors.Futures.future to create instances that each do the work on part of the range:
val a = Array(1, 2, 3, 4, 5, 6, 7, 8)
(0 to 4).map(_ * (a.length / 4)).sliding(2).map(i => scala.actors.Futures.future {
  var s = 0
  var j = i(0)
  while (j < i(1)) {
    s += a(j)
    j += 1
  }
  s
}).map(_()).sum // _() applies the future--blocks until it's done
Of course, you'd need to use this on a much longer array (and on a machine with four cores) for the parallelization to improve things.
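scala.actors.Futures has since been removed from the standard library; the same idea with scala.concurrent.Future (a sketch, assuming Scala 2.12 or later) looks like this:
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

val arr = Array(1, 2, 3, 4, 5, 6, 7, 8)
val bounds = (0 to 4).map(_ * (arr.length / 4))
val partials = bounds.sliding(2).map { i =>
  Future {
    // sum the slice [i(0), i(1)) with a plain while loop
    var s = 0
    var j = i(0)
    while (j < i(1)) { s += arr(j); j += 1 }
    s
  }
}.toSeq
val total = Await.result(Future.sequence(partials), 10.seconds).sum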
You should use lazily built collections when you chain more than one higher-order method:
v1.view zip v2 map { case (a,b) => a+b }
If you don't use a view or an iterator, each method will create a new intermediate immutable collection, even when it is not needed.
Immutable code probably won't be as fast as mutable code, but the lazy collection will improve the execution time of your code a lot.
Arrays are not type-erased; Vectors are. Basically, the JVM gives Array an advantage over other collections when handling primitives that cannot be overcome. Scala's specialization might decrease that advantage, but, given its cost in code size, it can't be used everywhere.