I am working on a matrix-summation kind of design. The compiler takes 4+ hours to generate 1+ million lines of code, and every line is an "assign ...". I don't know whether this is an inefficiency of the compiler or whether my coding style is bad. If someone could suggest some alternatives, that would be great!
Here is a description of the design:
The input is ANDed with a random matrix element by element and summed up using .reduce, so the result should be a 140x6 Vec; concatenating its elements gives an 840-bit output.
(rndvec is supposed to be a 140x840x6-bit random matrix. Since I don't know how to generate random values yet, I started with a fixed 140x6 to represent one row and feed it with the input over and over again.)
The following is my code:
import Chisel._
import scala.collection.mutable.HashMap
import util.Random

class LBio(n: Int) extends Bundle {
  var myinput  = UInt(INPUT, 840)
  var myoutput = UInt(OUTPUT, 840)
}

class Lbi(q: Int, n: Int, m: Int) extends Module {
  def mask(orig: Vec[UInt], maska: UInt, mi: Int) = {
    val result = Vec.fill(840){ UInt(width = 6) }
    for (i <- 0 until 840) {
      result(i) := orig(i) & Fill(6, maska(i)) // every bit of the input ANDed with the random vector
    }
    result
  }

  val io = new LBio(840)
  val rndvec    = Vec.fill(840){ UInt("h13", 6) } // random vector; for now it's just a replication of 0x13
  val resultvec = Vec.fill(140){ UInt(width = 6) }
  for (i <- 0 until 140) {
    resultvec(i) := mask(rndvec, io.myinput, m).reduce(_+_) // add the entire row of 6-bit elements together with reduce
  }
  io.myoutput := resultvec.toBits
}
The terminal report:
started inference
finished inference (4)
start width checking
finished width checking
started flattenning
finished flattening (941783)
resolving nodes to the components
finished resolving
started transforms
finished transforms
checking for combinational loops
NO COMBINATIONAL LOOP FOUND
COMPILING class TutorialExamples.Lbi 0 CHILDREN (0,0)
[success] Total time: 33453 s, completed Oct 16, 2013 10:32:10 PM
There's nothing obviously wrong with your Chisel code, but I should point out that if rndvec is 140x840x6 bits, that's ~689 kilobits (~86 kB) of state! And your reduce operation is on ~5 kilobits of state.
Chisel uses "assign" statements because your code is entirely combinational and Chisel produces a very structural form of Verilog.
I suspect the part that is killing the compile time (aside from the huge amount of state) is that you are generating and manipulating 140 Vecs with the mask() function.
I tried my hand at rewriting your code and got it down from 941,783 nodes to 202,723 (takes about 10-15 minutes to compile, but generates 11MB of Verilog code). I'm pretty sure this does what your code was doing:
class Hello(q: Int, dim_n: Int) extends Module
{
  val io = new LBio(dim_n)
  val rndvec    = Vec.fill(dim_n){ UInt("h13", 6) }
  val resultvec = Vec.fill(dim_n/6){ UInt(width = 6) }

  // lift this work outside of the for loop
  val padded_input = Vec.fill(dim_n){ UInt(width = 6) }
  for (i <- 0 until dim_n)
  {
    padded_input(i) := Fill(6, io.myinput(i)) // replicate input bit i across a 6-bit lane
  }

  for (i <- 0 until dim_n/6)
  {
    val result = Bits(width = dim_n*6)
    result := rndvec.toBits & padded_input.toBits
    var sum = UInt(0) // advanced Chisel - be careful with the use of var!
    for (j <- 0 until dim_n by 6)
    {
      sum = sum + result(j+5, j) // each element is 6 bits wide
    }
    resultvec(i) := sum
  }
  io.myoutput := resultvec.toBits
}
What I did was avoid doing the same work over and over again - like padding out the myinput Vec inside of the for loop's mask() function. I also kept everything in Bits() instead of Vecs. Sadly it means I lose the awesome .reduce() function.
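(If you miss .reduce, one possible middle ground, sketched here against the old Chisel 2 API and not something I have measured, is to build a plain Scala Seq of the 6-bit slices and reduce that; it reuses result, dim_n and i from the inner loop above:)
// Sketch only: slice the masked Bits into 6-bit chunks on the Scala side
// and reduce the resulting sequence of hardware nodes, without another Vec.
val slices = (0 until dim_n by 6).map(j => result(j + 5, j))
resultvec(i) := slices.reduce(_ + _)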
I think maybe the answer is "be cognizant of how much state you're creating" and "Vecs are awesome, but use carefully".
Do you have a Verilog version that's short and concise? It'd be interesting to see if there are areas where Chisel is losing out efficiency wise.
I was looking for the Scala equivalent (or underlying theory) of Python's np.random.choice (NumPy as np). I have an implementation that uses Python's np.random.choice method to select random moves from a probability distribution.
Python's inputs:
Input list: ['pooh', 'rabbit', 'piglet', 'Christopher'] and probabilities: [0.5, 0.1, 0.1, 0.3]
I want to select one of the values from the input list, given the associated probability of each input element.
The Scala standard library has no equivalent to np.random.choice but it shouldn't be too difficult to build your own, depending on which options/features you want to emulate.
Here, for example, is a way to get an infinite Stream of submitted items, with the probability of any one item weighted relative to the others.
def weightedSelect[T](input: (T, Int)*): Stream[T] = {
  val items: Seq[T] = input.flatMap{ x => Seq.fill(x._2)(x._1) }
  def output: Stream[T] = util.Random.shuffle(items).toStream #::: output
  output
}
With this, each input item is given a multiplier. So, to get an infinite pseudorandom selection of the characters c and v, with c coming up 3/5ths of the time and v coming up 2/5ths of the time:
val cvs = weightedSelect(('c',3),('v',2))
Thus the rough equivalent of the np.random.choice(aa_milne_arr,5,p=[0.5,0.1,0.1,0.3]) example would be:
weightedSelect("pooh"-> 5
,"rabbit" -> 1
,"piglet" -> 1
,"Christopher" -> 3).take(5).toArray
Or perhaps you want a better (less pseudo) random distribution, one that might be heavily lopsided.
def weightedSelect[T](items: Seq[T], distribution: Seq[Double]): Stream[T] = {
  assert(items.length == distribution.length)
  assert(math.abs(1.0 - distribution.sum) < 0.001) // must be at least close
  val dsums:  Seq[Double] = distribution.scanLeft(0.0)(_+_).tail
  val distro: Seq[Double] = dsums.init :+ 1.1 // close a possible gap
  Stream.continually(items(distro.indexWhere(_ > util.Random.nextDouble())))
}
The result is still an infinite Stream of the specified elements but the passed-in arguments are a bit different.
val choices: Stream[String] = weightedSelect( List("this", "that")
                                            , Array(4998/5000.0, 2/5000.0) )
// let's test the distribution
val (choiceA, choiceB) = choices.take(10000).partition(_ == "this")
choiceA.length //res0: Int = 9995
choiceB.length //res1: Int = 5 (not bad)
I have problems with arithmetic operations on doubles in Chisel. The examples I have seen use only the following types: Int, UInt, SInt.
I saw here that arithmetic operations are described only for SInt and UInt. What about Double?
I tried to declare my output out as Double but didn't know how, because the output of my code is a Double.
Is there a way to declare in Bundle an input and an output of type Double?
Here is my code:
class hashfunc(val k: Int, val n: Int) extends Module {
  val a = k + k
  val io = IO(new Bundle {
    val b   = Input(UInt(k.W))
    val w   = Input(UInt(k.W))
    var out = Output(UInt(a.W))
  })
  val tabHash1 = new Array[Array[Double]](n)
  val x = new ArrayBuffer[(Double, Data)]
  val tabHash = new Array[Double](tabHash1.size)
  for (ind <- tabHash1.indices) {
    var sum = 0.0
    for (ind2 <- 0 until x.size) {
      sum += ( x(ind2) * tabHash1(ind)(ind2) )
    }
    tabHash(ind) = ((sum + io.b) / io.w)
  }
  io.out := tabHash.reduce(_ + _)
}
When I compile the code, I get the following error:
code error
Thank you for your kind attention, looking forward to your responses.
Chisel does have a native FixedPoint type, which may be of use. It is in the experimental package:
import chisel3.experimental.FixedPoint
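As a minimal sketch (the 16-bit width and 8-bit binary point here are just assumptions, not anything from your design), the ports that were meant to be Double can be declared as FixedPoint in the Bundle:
import chisel3._
import chisel3.experimental._ // brings FixedPoint and the .W/.BP syntax into scope

class FixedIO extends Bundle {
  val in  = Input(FixedPoint(16.W, 8.BP))  // 16 total bits, 8 fractional bits
  val out = Output(FixedPoint(16.W, 8.BP))
}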
There is also a project, DspTools, that has simulation support for Doubles. It has some nice features, e.g. it allows modules to be parameterized on the numeric type (Complex, Double, FixedPoint, SInt), so that you can run simulations on Doubles to validate the desired mathematical behavior and then switch to a synthesizable number format that meets your precision criteria.
DspTools is an ongoing research project, and the team would appreciate feedback from outside users.
Operations on floating-point numbers (Double in this case) are not supported directly by any HDL. The reason is that while addition/subtraction/multiplication of fixed-point numbers is well defined, there are a lot of design-space trade-offs for floating-point hardware, as it is a much more complex piece of hardware.
That is to say, a high-performance floating-point unit is a significant piece of hardware in its own right and would be time-shared in any realistic design.
I have tasks that I want to execute concurrently, and each task takes a substantial amount of memory, so I have to execute them in batches of 2 to conserve memory.
def runme(n: Int = 120) = (1 to n).grouped(2).toList.flatMap { tuple =>
  tuple.par.map { x =>
    println(s"Running $x")
    val s = (1 to 100000).toList // intentionally to make the JVM allocate a sizeable chunk of memory
    s.sum.toLong
  }
}
val result = runme()
println(result.size + " => " + result.sum)
The result I expected was 120 => 84609924480, but the output was rather random. The returned collection size differed from execution to execution. Most of the time some results were missing, even though, looking at the console, all the futures were executed. I thought flatMap waited for the parallel executions in map to complete before returning. What should I do to always get the right result using par? Thanks
Just for the record: changing the underlying collection in this case shouldn't change the output of your program. The problem is related to this known bug. It's fixed from 2.11.6, so if you use that (or higher) Scala version, you should not see the strange behavior.
And about the overflow: I still think that your expected value is wrong. The sum overflows because the list contains integers (which are 32-bit) while the total sum exceeds the integer limit. You can check it with the following snippet:
val n = 100000
val s = (1 to n).toList                 // your original code
val yourValue = s.sum.toLong            // your original code
val correctValue = 1L * n * (n + 1) / 2 // use the math formula
var bruteForceValue = 0L                // in case you don't trust math :) It's a Long because of 0L
for (i ← 1 to n) bruteForceValue += i   // iterate through the range
println(s"yourValue = $yourValue")
println(s"correctvalue = $correctValue")
println(s"bruteForceValue = $bruteForceValue")
which produces the output
yourValue = 705082704
correctvalue = 5000050000
bruteForceValue = 5000050000
Cheers!
Thanks #kaktusito.
It worked after I changed the grouped list to a Vector or Seq, i.e. from (1 to n).grouped(2).toList.flatMap{... to (1 to n).grouped(2).toVector.flatMap{...
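For reference, that workaround applied to the original runme looks like this (the only change is toList → toVector):
def runme(n: Int = 120) = (1 to n).grouped(2).toVector.flatMap { tuple =>
  tuple.par.map { x =>
    println(s"Running $x")
    val s = (1 to 100000).toList
    s.sum.toLong
  }
}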
Here's an example of a snippet of code that, at first impression, looks like something that scalac could easily optimize away:
val t0 = System.nanoTime()
for (i <- 0 to 1000000000) {}
val t1 = System.nanoTime()
var i = 0
while (i < 1000000000) i += 1
val t2 = System.nanoTime()
println((t1 - t0).toDouble / (t2 - t1).toDouble)
The above code prints 76.30068413477652, and the ratio seems to get worse as the number of iterations is increased.
Is there a particular reason scalac chooses not to optimize for (i <- L to/until H) into whatever bytecode form javac generates for for (int i = L; i < H; i += 1)? Might it be because Scala chooses to keep things simple and expects the developer to resort to more performant forms, such as a while loop, when raw looping speed is required? If so, why is that good, given the frequency of such simple for loops?
The performance of for comprehensions in Scala is a very long-running debate right now.
See the following links:
http://www.scala-lang.org/old/node/9637.html
http://scala-programming-language.1934581.n4.nabble.com/optimizing-simple-fors-td2545502.html
TL;DR: the Scala team decided to concentrate on more general optimizations rather than ones that would have to favour particular classes and edge cases (in this case, Range).
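For context on where the overhead comes from: after desugaring, the for loop in the question is not a primitive counting loop; roughly speaking it becomes a foreach call on a Range with a closure, e.g.:
// What `for (i <- 0 to 1000000000) {}` roughly desugars to:
(0 to 1000000000).foreach { i => () }

// The while loop, by contrast, compiles down to a plain counting loop
// over a primitive int.
var i = 0
while (i < 1000000000) i += 1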
I am trying to work more with Scala's immutable collections, since they are easy to parallelize, but I am struggling with some newbie problems. I am looking for a way to (efficiently) create a new Vector from an operation. To be precise, I want something like:
val v : Vector[Double] = RandomVector(10000)
val w : Vector[Double] = RandomVector(10000)
val r = v + w
I tested the following:
// 1)
val r: Vector[Double] = (v.zip(w)).map{ t: (Double, Double) => t._1 + t._2 }

// 2)
val vb = new VectorBuilder[Double]()
var i = 0
while (i < v.length) {
  vb += v(i) + w(i)
  i = i + 1
}
val r = vb.result
Both take really long compared to working with an Array:
[Vector Zip/Map ] Elapsed time 0.409 msecs
[Vector While Loop] Elapsed time 0.374 msecs
[Array While Loop ] Elapsed time 0.056 msecs
// with warm-up (10000) and avg. over 10000 runs
Is there a better way to do it? I think the zip/map/reduce approach has the advantage that it can run in parallel as soon as the collections support that.
Thanks
Vector is not specialized for Double, so you're going to pay a sizable performance penalty for using it. If you are doing a simple operation, you're probably better off using an array on a single core than a Vector or other generic collection on the entire machine (unless you have 12+ cores). If you still need parallelization, there are other mechanisms you can use, such as using scala.actors.Futures.future to create instances that each do the work on part of the range:
val a = Array(1,2,3,4,5,6,7,8)
(0 to 4).map(_ * (a.length/4)).sliding(2).map(i => scala.actors.Futures.future {
  var s = 0
  var j = i(0)
  while (j < i(1)) {
    s += a(j)
    j += 1
  }
  s
}).map(_()).sum // _() applies the future--blocks until it's done
Of course, you'd need to use this on a much longer array (and on a machine with four cores) for the parallelization to improve things.
You should use lazily built collections when you chain more than one higher-order method:
v1.view zip v2 map { case (a,b) => a+b }
If you don't use a view or an iterator, each method will create a new immutable collection, even when it isn't needed.
The immutable code probably won't be as fast as the mutable version, but the lazy collection will improve the execution time of your code a lot.
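To make the difference concrete, here is a small sketch (the two vectors are just placeholders) of the strict chain versus the view-based one:
val v1 = Vector.fill(10000)(util.Random.nextDouble())
val v2 = Vector.fill(10000)(util.Random.nextDouble())

// Strict: zip materializes an intermediate Vector[(Double, Double)]
// before map builds the final result.
val strict = v1.zip(v2).map { case (a, b) => a + b }

// Lazy: the view fuses zip and map, so only the final collection is built.
val fused = (v1.view zip v2).map { case (a, b) => a + b }.toVector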
Arrays are not type-erased; Vectors are. Basically, the JVM gives Array an advantage over other collections when handling primitives that cannot be overcome. Scala's specialization might decrease that advantage but, given its cost in code size, it can't be used everywhere.
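A tiny illustration of that point (the REPL-style comments show what the JVM sees at runtime):
val arr = Array(1.0, 2.0, 3.0)   // backed by a primitive JVM double[]; elements are unboxed
val vec = Vector(1.0, 2.0, 3.0)  // generic storage; elements are boxed java.lang.Double

arr.getClass.getSimpleName       // "double[]" -- a real primitive array
vec.getClass.getSimpleName       // "Vector"   -- the Double element type is erased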