I am new to Chisel HDL, and I found that Chisel HDL does provide a fixed-point representation (I found this link:
Fixed Point Arithmetic in Chisel HDL).
However, when I try it in Chisel it doesn't actually work:
import Chisel._

class Toy extends Module {
  val io = new Bundle {
    val in0 = SFix(4, 12).asInput
    val in1 = SFix(4, 12).asInput
    val out = SFix(4, 16).asOutput
    val oraw = Bits(OUTPUT, width = 128)
  }
  val int_result = -io.in0 * (io.in0 + io.in1)
  io.out := int_result
  io.oraw := int_result.raw
}

class ToyTest(c: Toy) extends Tester(c) {
  for (i <- 0 until 20) {
    val i0 = 0.5
    val i1 = 0.25
    poke(c.io.in0, i0)
    poke(c.io.in1, i1)
    val res = -i0 * (i0 + i1)
    step(1)
    expect(c.io.out, res)
  }
}

object Toy {
  def main(args: Array[String]): Unit = {
    val tutArgs = args.slice(1, args.length)
    chiselMainTest(tutArgs, () => Module(new Toy())) {
      c => new ToyTest(c)
    }
  }
}
which produces an error.
In my build.sbt file, I select the latest Chisel release with:
libraryDependencies += "edu.berkeley.cs" %% "chisel" % "latest.release"
According to the Chisel code, SFix seems to be deprecated; Fixed should be used instead.
I modified your code to use it, but there is a problem with poke and expect: it seems that Fixed is not yet supported by poke and expect (see the workaround sketch after the code below).
import Chisel._

class Toy extends Module {
  val io = new Bundle {
    val in0 = Fixed(INPUT, 4, 12)
    val in1 = Fixed(INPUT, 4, 12)
    val out = Fixed(OUTPUT, 8, 24)
    val oraw = Bits(OUTPUT, width = 128)
  }
  val int_result = -io.in0 * (io.in0 + io.in1)
  io.out := int_result
  io.oraw := int_result.asUInt()
}

class ToyTest(c: Toy) extends Tester(c) {
  for (i <- 0 until 20) {
    val i0 = Fixed(0.5, 4, 12)
    val i1 = Fixed(0.25, 4, 12)
    c.io.in0 := i0
    c.io.in1 := i1
    //poke(c.io.in0, i0)
    //poke(c.io.in1, i1)
    val res = -i0 * (i0 + i1)
    step(1)
    //expect(c.io.out, res)
  }
}

object Toy {
  def main(args: Array[String]): Unit = {
    val tutArgs = args.slice(1, args.length)
    chiselMainTest(tutArgs, () => Module(new Toy())) {
      c => new ToyTest(c)
    }
  }
}
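In the meantime, one possible workaround is to drive and check the raw two's-complement value that a Fixed signal carries. A minimal sketch of the conversion in plain Scala (the helper name is mine, and whether the Chisel 2 tester accepts the raw value directly is an assumption to verify):

// Hypothetical helper: scale a Double by 2^fracWidth and round to get the raw
// integer that a Fixed(width, fracWidth) signal would hold, so it can be
// poked/expected as plain bits.
def toFixedRaw(x: Double, fracWidth: Int): BigInt =
  BigInt(math.round(x * (1L << fracWidth)))

// e.g. toFixedRaw(0.5, 12) == 2048 and toFixedRaw(-0.375, 12) == -1536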
When I wrote this:
class MullerC(val WIDTH: Int = 2) extends Module {
  val io = IO(new Bundle {
    val in = Input(Vec(WIDTH, Bool()))
    val out = Output(Bool())
  })
  io.out := false.B
  when (io.in.reduce(_ & _)) {
    io.out := true.B
  }.elsewhen (io.in.map(!_).reduce(_ & _)) {
    io.out := false.B
  }
}
I got Verilog like this:
module MullerC(
  input  clock,
  input  reset,
  input  io_in_0,
  input  io_in_1,
  output io_out
);
  assign io_out = io_in_0 & io_in_1;
endmodule
That is a simple AND gate instead of a C gate.
But when I tried to add an otherwise clause like this:
class MullerC(val WIDTH: Int = 2) extends Module {
  val io = IO(new Bundle {
    val in = Input(Vec(WIDTH, Bool()))
    val out = Output(Bool())
  })
  io.out := false.B
  when (io.in.reduce(_ & _)) {
    io.out := true.B
  }.elsewhen (io.in.map(!_).reduce(_ & _)) {
    io.out := false.B
  }.otherwise {
    io.out := io.out
  }
}
It no longer compiles:
Exception in thread "main" firrtl.transforms.CheckCombLoops$CombLoopException: : [module MullerC] Combinational loop detected:
MullerC.io_out
MullerC._GEN_0 #[----.scala 14:38 ----.scala 15:12 ----.scala 17:12]
MullerC._GEN_1 #[----.scala 12:30 ----.scala 13:12]
MullerC.io_out
How should I implement the Muller C gate in Chisel? Many thanks.
I found the answer via this link: Disable FIRRTL pass that checks for combinational loops
I should keep the otherwise clause and add --no-check-comb-loops as a parameter when emitting the Verilog code. Thanks.
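For reference, a minimal sketch of passing that flag programmatically (this assumes a recent chisel3 with the ChiselStage driver; the object name is my own):

import chisel3.stage.ChiselStage

object MullerCMain extends App {
  // --no-check-comb-loops is forwarded to FIRRTL and disables the combinational-loop
  // check, which the C gate's intentional feedback would otherwise trip.
  (new ChiselStage).emitVerilog(new MullerC(2), Array("--no-check-comb-loops"))
}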
By the way, I also tried this and it works as well.
class MullerC(val WIDTH: Int = 2) extends Module {
  val io = IO(new Bundle {
    val in = Input(Vec(WIDTH, Bool()))
    val out = Output(Bool())
  })
  io.out := false.B
  val allTrue = Wire(Bool())
  val allFalse = Wire(Bool())
  allTrue := io.in.reduce(_ & _)
  allFalse := io.in.map(!_).reduce(_ & _)
  io.out := Mux(allTrue | allFalse, Mux(allTrue, true.B, false.B), io.out)
}
This generates nicer Verilog code, although it does not really matter.
I'm trying to implement a carry-select adder using Chisel in Scala, but after various attempts I keep getting errors. The ripple-carry adder and the multiplexers are tested and working. This is my current version. Any input is greatly appreciated!
class CarrySelectAdder(val n: Int, val m: Int) extends Module {
  val io = IO(new Bundle {
    val a = Input(UInt(n.W))
    val b = Input(UInt(n.W))
    val cin = Input(UInt(1.W))
    val sum = Output(UInt((n + 1).W))
  })
  object carryselecthelper { // not used atm
    def n_sum_idx(stage: UInt): UInt = {
      return stage * (stage + 1.U) / 2.U
    }
  }
  //val test = math.ceil(n/m)
  val rcas = Array.fill(2 * (n / m)) { Module(new RcaAdder(m)).io } // 2*math.ceil(n/m)
  val muxs = Array.fill(n / m) { Module(new Multiplexer(m + 1)).io } // math.ceil(n/m)
  val Sum = Wire(Vec(n, Bool()))
  for (i <- 0 until n) {
    rcas(i * 2).a := io.a.apply(i * m + m, i * m)
    rcas(i * 2).b := io.b.apply(i * m + m, i * m)
    rcas(i * 2).cin := 0.asUInt
    rcas(2 * i + 1).a := io.a.apply(i * m + m, i * m)
    rcas(2 * i + 1).b := io.b.apply(i * m + m, i * m)
    rcas(2 * i + 1).cin := 1.asUInt
    muxs(i).a := rcas(i * 2).sum
    muxs(i).b := rcas(2 * i + 1).sum
    if (i > 0) {
      muxs(i).sel := io.cin
    }
    if (i > 0) {
      if (muxs(i).sel == 0) {
        muxs(i).sel := rcas(2 * i).cout
      } else {
        muxs(i).sel := rcas(2 * i + 1).cout
      }
    }
    Sum(i) := muxs(i).out
  }
  io.sum := Sum.asUInt
}
Currently I'm getting an exception from ChiselGeneratorAnnotation:
chisel3.internal.ChiselException: Exception thrown when elaborating ChiselGeneratorAnnotation
  at chisel3.stage.ChiselGeneratorAnnotation.elaborate(ChiselAnnotations.scala:55)
  at chisel3.stage.phases.Elaborate.$anonfun$transform$1(Elaborate.scala:19)
  at scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:245)
  at scala.collection.immutable.List.foreach(List.scala:392)
  at scala.collection.TraversableLike.flatMap(TraversableLike.scala:245)
  at scala.collection.TraversableLike.flatMap$(TraversableLike.scala:242)
  at scala.collection.immutable.List.flatMap(List.scala:355)
  at chisel3.stage.phases.Elaborate.transform(Elaborate.scala:18)
  at chisel3.iotesters.setupTreadleBackend$.apply(TreadleBackend.scala:143)
  at chisel3.iotesters.Driver$.$anonfun$execute$2(Driver.scala:53)
  at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
  at logger.Logger$.$anonfun$makeScope$2(Logger.scala:168)
  at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
  at logger.Logger$.makeScope(Logger.scala:166)
  at logger.Logger$.makeScope(Logger.scala:127)
  at chisel3.iotesters.Driver$.$anonfun$execute$1(Driver.scala:38)
  at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:23)
  at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
  at chisel3.iotesters.Driver$.execute(Driver.scala:38)
Upon request I added the Multiplexer and RcaAdder classes; they should work fine, since they passed all their tests without issues. Thanks!
class Multiplexer(val n: Int) extends Module {
  val io = IO(new Bundle {
    val a = Input(UInt(n.W))
    val b = Input(UInt(n.W))
    val sel = Input(UInt(n.W))
    val out = Output(UInt(n.W))
  })
  io.out := (io.sel & io.a) | (~io.sel & io.b)
}

class RcaAdder(val n: Int) extends Module {
  val io = IO(new Bundle {
    val a = Input(UInt(n.W))
    val b = Input(UInt(n.W))
    val cin = Input(UInt(1.W))
    val sum = Output(UInt(n.W))
    val cout = Output(UInt(1.W))
  })
  val FAs = Array.fill(n) { Module(new FullAdder()).io }
  val carry = Wire(Vec(n + 1, UInt(1.W)))
  val sum = Wire(Vec(n, Bool()))
  carry(0) := io.cin
  for (i <- 0 until n) {
    FAs(i).a := io.a(i)
    FAs(i).b := io.b(i)
    FAs(i).cin := carry(i)
    carry(i + 1) := FAs(i).cout
    sum(i) := FAs(i).sum.asBool
  }
  io.sum := sum.asUInt
  io.cout := carry(n)
}
(This is what is used to test the code.)
class CarrySelectAdderTester(dut: CarrySelectAdder, n: Int) extends PeekPokeTester(dut) {
  val max = scala.math.pow(2, n).toInt - 1
  for (i <- 0 to max) {
    for (j <- 0 to max) {
      for (k <- 0 to 1) {
        poke(dut.io.a, i)
        poke(dut.io.b, j)
        poke(dut.io.cin, k)
        step(100)
        expect(dut.io.sum, (i + j + k))
      }
    }
  }
}

object CarrySelectAdderTester extends App {
  val bitWidth = 6
  val bitsperblock = 3
  println("Testing the Carry Select Adder")
  iotesters.Driver.execute(Array[String](), () => new CarrySelectAdder(bitWidth, bitsperblock)) {
    c => new CarrySelectAdderTester(c, bitWidth)
  }
}
I need to convert a Float32 into a Chisel FixedPoint, perform some computation, and convert the FixedPoint back to Float32.
For example, I need the following:
val a = 3.1F
val b = 2.2F
val res = a * b // REPL returns res: Float 6.82
Now, I do this:
import chisel3.experimental.FixedPoint
val fp_tpe = FixedPoint(6.W, 2.BP)
val a_fix = a.Something (fp_tpe) // convert a to FixPoint
val b_fix = b.Something (fp_tpe) // convert b to FixPoint
val res_fix = a_fix * b_fix
val res0 = res_fix.Something (fp_tpe) // convert back to Float
As a result, I'd expect the difference to be within a small epsilon, e.g.
val eps = 1e-4
assert ( abs(res - res0) < eps, "The error is too big")
Can anyone provide a working example of the Chisel3 FixedPoint class for the pseudocode above?
Take a look at the following code:
import chisel3._
import chisel3.core.FixedPoint
import dsptools._

class FPMultiplier extends Module {
  val io = IO(new Bundle {
    val a = Input(FixedPoint(6.W, binaryPoint = 2.BP))
    val b = Input(FixedPoint(6.W, binaryPoint = 2.BP))
    val c = Output(FixedPoint(12.W, binaryPoint = 4.BP))
  })
  io.c := io.a * io.b
}

class FPMultiplierTester(c: FPMultiplier) extends DspTester(c) {
  //
  // This will PASS, there is sufficient precision to model the inputs
  //
  poke(c.io.a, 3.25)
  poke(c.io.b, 2.5)
  step(1)
  expect(c.io.c, 8.125)
  //
  // This will FAIL, there is not sufficient precision to model the inputs
  // But this is only caught on output, this is likely the right approach
  // because you can't really pass in wrong precision data in hardware.
  //
  poke(c.io.a, 3.1)
  poke(c.io.b, 2.2)
  step(1)
  expect(c.io.c, 6.82)
}

object FPMultiplierMain {
  def main(args: Array[String]): Unit = {
    iotesters.Driver.execute(Array("-fiv"), () => new FPMultiplier) { c =>
      new FPMultiplierTester(c)
    }
  }
}
I'd also suggest looking at ParameterizedAdder in dsptools; it gives you a feel for how to write hardware modules that accept different number types. Generally you start with DspReal, confirm the model, and then start experimenting/calculating with FixedPoint sizes that return results with the desired precision.
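For the literal-conversion step in the question (the a.Something(fp_tpe) part), chisel3's experimental FixedPoint also has a fromDouble factory. A minimal sketch assuming that API (the module and value names are mine):

import chisel3._
import chisel3.experimental.FixedPoint

class LiteralDemo extends Module {
  val io = IO(new Bundle { val out = Output(FixedPoint(16.W, 8.BP)) })
  // fromDouble quantises the Scala Double to the requested binary point:
  val aFix = FixedPoint.fromDouble(3.1, 8.W, 4.BP) // about 3.125 with 4 fractional bits
  val bFix = FixedPoint.fromDouble(2.2, 8.W, 4.BP) // about 2.1875 with 4 fractional bits
  io.out := aFix * bFix // roughly 6.836 rather than exactly 6.82, as expected from quantisation
}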
For others' benefit, I provide an improved solution based on #Chick's answer, rewritten in more abstract Scala with configurable DSP tolerances.
package my_pkg

import chisel3._
import chisel3.core.{FixedPoint => FP}
import dsptools.{DspTester, DspTesterOptions, DspTesterOptionsManager}

class FPGenericIO(inType: FP, outType: FP) extends Bundle {
  val a = Input(inType)
  val b = Input(inType)
  val c = Output(outType)
}

class FPMul(inType: FP, outType: FP) extends Module {
  val io = IO(new FPGenericIO(inType, outType))
  io.c := io.a * io.b
}

class FPMulTester(c: FPMul) extends DspTester(c) {
  val uut = c.io
  // This will PASS, there is sufficient precision to model the inputs
  poke(uut.a, 3.25)
  poke(uut.b, 2.5)
  step(1)
  expect(uut.c, 3.25 * 2.5)
  // This will FAIL unless you increase the tolerance, which is eps = 0.0 by default
  poke(uut.a, 3.1)
  poke(uut.b, 2.2)
  step(1)
  expect(uut.c, 3.1 * 2.2)
}

object FPUMain extends App {
  val fpInType = FP(8.W, 4.BP)
  val fpOutType = FP(12.W, 6.BP)
  // Update default DspTester options and increase tolerance
  val opts = new DspTesterOptionsManager {
    dspTesterOptions = DspTesterOptions(
      fixTolLSBs = 2,
      genVerilogTb = false,
      isVerbose = true
    )
  }
  dsptools.Driver.execute(() => new FPMul(fpInType, fpOutType), opts) {
    c => new FPMulTester(c)
  }
}
Here's my final DSP multiplier implementation, which should support both FixedPoint and DspComplex numbers. #ChickMarkley, how do I update this class to implement a complex multiplication? (See the sketch after the code below.)
package my_pkg

import chisel3._
import dsptools.numbers.{Ring, DspComplex}
import dsptools.numbers.implicits._
import dsptools.DspContext
import chisel3.core.{FixedPoint => FP}
import dsptools.{DspTester, DspTesterOptions, DspTesterOptionsManager}

class FPGenericIO[A <: Data: Ring, B <: Data: Ring](inType: A, outType: B) extends Bundle {
  val a = Input(inType.cloneType)
  val b = Input(inType.cloneType)
  val c = Output(outType.cloneType)
  override def cloneType = (new FPGenericIO(inType, outType)).asInstanceOf[this.type]
}

class FPMul[A <: Data: Ring, B <: Data: Ring](inType: A, outType: B) extends Module {
  val io = IO(new FPGenericIO(inType, outType))
  DspContext.withNumMulPipes(3) {
    io.c := io.a * io.b
  }
}

class FPMulTester[A <: Data: Ring, B <: Data: Ring](c: FPMul[A, B]) extends DspTester(c) {
  val uut = c.io
  //
  // This will PASS, there is sufficient precision to model the inputs
  //
  poke(uut.a, 3.25)
  poke(uut.b, 2.5)
  step(1)
  expect(uut.c, 3.25 * 2.5)
  //
  // This will FAIL, there is not sufficient precision to model the inputs
  // But this is only caught on output, this is likely the right approach
  // because you can't really pass in wrong precision data in hardware.
  //
  poke(uut.a, 3.1)
  poke(uut.b, 2.2)
  step(1)
  expect(uut.c, 3.1 * 2.2)
}

object FPUMain extends App {
  val fpInType = FP(8.W, 4.BP)
  val fpOutType = FP(12.W, 6.BP)
  //val comp = DspComplex[Double] // How to declare a complex DSP type ?
  val opts = new DspTesterOptionsManager {
    dspTesterOptions = DspTesterOptions(
      fixTolLSBs = 0,
      genVerilogTb = false,
      isVerbose = true
    )
  }
  dsptools.Driver.execute(() => new FPMul(fpInType, fpOutType), opts) {
    //dsptools.Driver.execute (() => new FPMul(comp, comp), opts) { // <-- this won't compile
    c => new FPMulTester(c)
  }
}
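Not a definitive answer to the DspComplex question, but a hedged sketch of what the complex instantiation might look like, assuming dsptools' DspComplex(real, imag) factory and the imports above. Note that the existing FPMulTester pokes plain Doubles, so a complex run would also need a tester that pokes breeze.math.Complex values:

object FPUComplexMain extends App {
  // Hypothetical complex types whose real and imaginary parts are FixedPoint.
  val cplxInType = DspComplex(FP(8.W, 4.BP), FP(8.W, 4.BP))
  val cplxOutType = DspComplex(FP(16.W, 8.BP), FP(16.W, 8.BP))
  // e.g. dsptools.Driver.execute(() => new FPMul(cplxInType, cplxOutType), opts) { ... }
  // with a DspComplex-aware tester in place of FPMulTester.
}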
I created the following test, which fits a simple linear regression model to dummy streaming data.
I use hyper-parameter optimisation to find good values for the stepSize, numIterations, and initialWeights of the linear model.
Everything runs fine, except for the last lines of the code, which are commented out:
// Save the evaluations for further visualization
// val gridEvalsRDD = sc.parallelize(gridEvals)
// gridEvalsRDD.coalesce(1)
// .map(e => "%.3f\t%.3f\t%d\t%.3f".format(e._1, e._2, e._3, e._4))
// .saveAsTextFile("data/mllib/streaming")
The problem is with the SparkContext sc. If I initialize it at the beginning of the test, then the program throws errors. It looks like sc should be defined in some special way in order to avoid conflicts with ssc (the streaming Spark context). Any ideas? (See the sketch after the code below.)
The whole code:
// scalastyle:off
package org.apache.spark.mllib.regression

import org.apache.spark.mllib.linalg.{Vector, Vectors}
import org.apache.spark.mllib.util.LinearDataGenerator
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{StreamingContext, TestSuiteBase}
import org.apache.spark.streaming.TestSuiteBase
import org.scalatest.BeforeAndAfter

class StreamingLinearRegressionHypeOpt extends TestSuiteBase with BeforeAndAfter {
  // use longer wait time to ensure job completion
  override def maxWaitTimeMillis: Int = 20000

  var ssc: StreamingContext = _

  override def afterFunction() {
    super.afterFunction()
    if (ssc != null) {
      ssc.stop()
    }
  }

  def calculateMSE(output: Seq[Seq[(Double, Double)]], n: Int): Double = {
    val mse = output
      .map {
        case seqOfPairs: Seq[(Double, Double)] =>
          val err = seqOfPairs.map(p => math.abs(p._1 - p._2)).sum
          err * err
      }.sum / n
    mse
  }

  def calculateRMSE(output: Seq[Seq[(Double, Double)]], n: Int): Double = {
    val mse = output
      .map {
        case seqOfPairs: Seq[(Double, Double)] =>
          val err = seqOfPairs.map(p => math.abs(p._1 - p._2)).sum
          err * err
      }.sum / n
    math.sqrt(mse)
  }

  def dummyStringStreamSplit(datastream: Stream[String]) =
    datastream.flatMap(txt => txt.split(" "))

  test("Test 1") {
    // create model initialized with zero weights
    val model = new StreamingLinearRegressionWithSGD()
      .setInitialWeights(Vectors.dense(0.0, 0.0))
      .setStepSize(0.2)
      .setNumIterations(25)

    // generate sequence of simulated data for testing
    val numBatches = 10
    val nPoints = 100
    val inputData = (0 until numBatches).map { i =>
      LinearDataGenerator.generateLinearInput(0.0, Array(10.0, 10.0), nPoints, 42 * (i + 1))
    }

    // Without hyper-parameters optimization
    withStreamingContext(setupStreams(inputData, (inputDStream: DStream[LabeledPoint]) => {
      model.trainOn(inputDStream)
      model.predictOnValues(inputDStream.map(x => (x.label, x.features)))
    })) { ssc =>
      val output: Seq[Seq[(Double, Double)]] = runStreams(ssc, numBatches, numBatches)
      val rmse = calculateRMSE(output, nPoints)
      println(s"RMSE = $rmse")
    }

    // With hyper-parameters optimization
    val gridParams = Map(
      "initialWeights" -> List(Vectors.dense(0.0, 0.0), Vectors.dense(10.0, 10.0)),
      "stepSize" -> List(0.1, 0.2, 0.3),
      "numIterations" -> List(25, 50)
    )
    val gridEvals = for (initialWeights <- gridParams("initialWeights");
                         stepSize <- gridParams("stepSize");
                         numIterations <- gridParams("numIterations")) yield {
      val lr = new StreamingLinearRegressionWithSGD()
        .setInitialWeights(initialWeights.asInstanceOf[Vector])
        .setStepSize(stepSize.asInstanceOf[Double])
        .setNumIterations(numIterations.asInstanceOf[Int])
      withStreamingContext(setupStreams(inputData, (inputDStream: DStream[LabeledPoint]) => {
        lr.trainOn(inputDStream)
        lr.predictOnValues(inputDStream.map(x => (x.label, x.features)))
      })) { ssc =>
        val output: Seq[Seq[(Double, Double)]] = runStreams(ssc, numBatches, numBatches)
        val cvRMSE = calculateRMSE(output, nPoints)
        println(s"RMSE = $cvRMSE")
        (initialWeights, stepSize, numIterations, cvRMSE)
      }
    }

    // Save the evaluations for further visualization
    // val gridEvalsRDD = sc.parallelize(gridEvals)
    // gridEvalsRDD.coalesce(1)
    //   .map(e => "%.3f\t%.3f\t%d\t%.3f".format(e._1, e._2, e._3, e._4))
    //   .saveAsTextFile("data/mllib/streaming")
  }
}
// scalastyle:on
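One direction to explore (a hedged sketch, not tested against this suite): Spark's StreamingContext already carries a SparkContext, exposed as ssc.sparkContext, so the evaluations could be saved without constructing a second context. A stand-alone illustration with placeholder data and names of my choosing:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object GridEvalSaveDemo extends App {
  val conf = new SparkConf().setMaster("local[2]").setAppName("grid-eval-save-demo")
  val ssc = new StreamingContext(conf, Seconds(1))
  val sc = ssc.sparkContext // reuse the existing context instead of creating a new SparkContext

  // Placeholder results standing in for gridEvals: (stepSize, numIterations, cvRMSE)
  val gridEvals = Seq((0.1, 25, 1.23), (0.2, 50, 0.98))
  sc.parallelize(gridEvals)
    .coalesce(1)
    .map { case (stepSize, numIter, rmse) => f"$stepSize%.3f\t$numIter%d\t$rmse%.3f" }
    .saveAsTextFile("data/mllib/streaming-demo")

  ssc.stop()
}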
How does one access the parameters used to construct a Module from inside the Tester that is testing it?
In the test below I am passing the parameters explicitly both to the Module and to the Tester. I would prefer not to have to pass them to the Tester but instead extract them from the module that was also passed in.
Also, I am new to Scala/Chisel, so any tips on bad techniques I'm using would be appreciated :).
import Chisel._
import math.pow

class TestA(dataWidth: Int, arrayLength: Int) extends Module {
  val dataType = Bits(INPUT, width = dataWidth)
  val arrayType = Vec(gen = dataType, n = arrayLength)
  val io = new Bundle {
    val i_valid = Bool(INPUT)
    val i_data = dataType
    val i_array = arrayType
    val o_valid = Bool(OUTPUT)
    val o_data = dataType.flip
    val o_array = arrayType.flip
  }
  io.o_valid := io.i_valid
  io.o_data := io.i_data
  io.o_array := io.i_array
}

class TestATests(c: TestA, dataWidth: Int, arrayLength: Int) extends Tester(c) {
  val maxData = pow(2, dataWidth).toInt
  for (t <- 0 until 16) {
    val i_valid = rnd.nextInt(2)
    val i_data = rnd.nextInt(maxData)
    val i_array = List.fill(arrayLength)(rnd.nextInt(maxData))
    poke(c.io.i_valid, i_valid)
    poke(c.io.i_data, i_data)
    (c.io.i_array, i_array).zipped foreach {
      (element, value) => poke(element, value)
    }
    expect(c.io.o_valid, i_valid)
    expect(c.io.o_data, i_data)
    (c.io.o_array, i_array).zipped foreach {
      (element, value) => poke(element, value)
    }
    step(1)
  }
}

object TestAObject {
  def main(args: Array[String]): Unit = {
    val tutArgs = args.slice(0, args.length)
    val dataWidth = 5
    val arrayLength = 6
    chiselMainTest(tutArgs, () => Module(
      new TestA(dataWidth = dataWidth, arrayLength = arrayLength))) {
      c => new TestATests(c, dataWidth = dataWidth, arrayLength = arrayLength)
    }
  }
}
If you make the arguments dataWidth and arrayLength members of TestA you can just reference them. In Scala this can be accomplished by inserting val into the argument list:
class TestA(val dataWidth: Int, val arrayLength: Int) extends Module ...
Then you can reference them from the test as members with c.dataWidth or c.arrayLength.
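A minimal sketch of how the tester then looks, reusing the question's Chisel 2-style API (trimmed to a single data port for brevity):

import Chisel._
import math.pow

class TestA(val dataWidth: Int, val arrayLength: Int) extends Module {
  val io = new Bundle {
    val i_data = Bits(INPUT, width = dataWidth)
    val o_data = Bits(OUTPUT, width = dataWidth)
  }
  io.o_data := io.i_data
}

// No extra constructor parameters: the tester reads them back from the DUT.
class TestATests(c: TestA) extends Tester(c) {
  val maxData = pow(2, c.dataWidth).toInt
  for (t <- 0 until 16) {
    val v = rnd.nextInt(maxData)
    poke(c.io.i_data, v)
    expect(c.io.o_data, v)
    step(1)
  }
}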