Is it possible to test chisel Reg() in console? - scala

To test Chisel code, I launch sbt and then a Scala console from my project directory (the one containing build.sbt). I can import the chisel3 library:
$ cd myproject
$ sbt
sbt:myproject> console
scala> import chisel3._
import chisel3._
Then I can try out some Chisel code, for example a data type:
scala> val plop = "b01010101".U(20.W)
plop: chisel3.UInt = UInt<20>(85)
But I can't test Reg() or other Module() elements:
scala> val plopReg = RegInit(23.U(24.W))
java.lang.IllegalArgumentException: requirement failed: must be inside Builder context
at scala.Predef$.require(Predef.scala:281)
at chisel3.internal.Builder$.dynamicContext(Builder.scala:232)
at chisel3.internal.Builder$.currentClock(Builder.scala:308)
at chisel3.internal.Builder$.forcedClock(Builder.scala:318)
at chisel3.RegInit$.apply(Reg.scala:155)
at chisel3.RegInit$.apply(Reg.scala:173)
... 36 elided
Is there a trick to test these Chisel elements in the console? Or is it mandatory to write a source file?

What's going on here is that UInt is a Chisel type while Reg is a hardware type.
You can play with hardware types only inside a module. I often do something like the following to play with them on the console:
import chisel3._
import chisel3.stage.{ChiselStage, ChiselGeneratorAnnotation}
import chisel3.util.Cat
import firrtl.EmittedCircuitAnnotation
class Foo extends MultiIOModule {
  val in = IO(Input(Bool()))
  val out = IO(Output(Bool()))
  val tmp = RegNext(~in)
  out := tmp
}
val args = Array(
  "-X", "verilog",
  "-E", "high",
  "-E", "middle",
  "-E", "low",
  "-E", "verilog")
(new ChiselStage).execute(args, Seq(ChiselGeneratorAnnotation(() => new Foo)))
You can then look at the various outputs inside your chisel3 top-level directory.
More Information
What's going on, specifically, is that UInt (and things like it) are factories that construct instances of Chisel types (technically UInt is really an object that extends UIntFactory). When you do UInt(4.W), that constructs a new UInt. You can construct new instances anywhere you want, which is why this works on the console.
However, when you do Reg(UInt(4.W)), that interacts with global mutable state used during the elaboration process to associate a register with a specific module. This global mutable state is stored inside the Builder. The error you get comes from the Builder, because you tried to use its methods without first being inside a module.
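If you just want to poke at hardware types from the console without spelling out all the emitter options, newer Chisel 3 releases also provide one-shot emitters on ChiselStage. A minimal sketch, assuming a Chisel version where the emitVerilog instance method is available:
import chisel3._
import chisel3.stage.ChiselStage

// A throwaway module wrapping the hardware we want to experiment with;
// RegInit is legal here because elaboration runs inside a Builder context.
class Scratch extends Module {
  val io = IO(new Bundle { val out = Output(UInt(24.W)) })
  val plopReg = RegInit(23.U(24.W))
  io.out := plopReg
}

println((new ChiselStage).emitVerilog(new Scratch))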

Related

Write method to a class dynamically at runtime in scala and create a jar

I would like to understand whether there is a way to add a method to an existing class at runtime and create a jar dynamically in Scala.
So far I have managed to create a class dynamically and run it through reflection; however, the class is a dynamic class, and no class file is generated.
val mirror = runtimeMirror(getClass.getClassLoader)
val tb = ToolBox(mirror).mkToolBox()
val function = q"def function(x: Int): Int = x + 2"
val functionWrapper = "object FunctionWrapper { " + function + "}"
data.map(x => tb.eval(q"$functionSymbol.function($x)"))
I got this from another source; however, the class is available only for this run and is not generated as a file.
I would like to add a function to an existing class at runtime, compile it, and create a jar for it.
Kindly suggest a way.
Thanks in advance.
I guess the code snippet you provided should actually look like
import scala.reflect.runtime.universe._
import scala.tools.reflect.ToolBox
val mirror = runtimeMirror(getClass.getClassLoader)
val tb = ToolBox(mirror).mkToolBox()
val function: Tree = q"def function(x: Int): Int = x + 2"
val functionWrapper: Symbol = tb.define(q"object FunctionWrapper { $function }".asInstanceOf[ImplDef])
val data: List[Tree] = List(q"1", q"2")
data.map(x => tb.eval(q"$functionWrapper.function($x)")) // List(3, 4)
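As a side note before addressing the questions: if the generated function will be invoked many times, ToolBox.compile avoids re-typechecking the tree on every call by compiling it once into a thunk. A small sketch against the toolbox built above:
// Compile once, call many times: tb.compile returns a () => Any thunk.
val compiled: () => Any = tb.compile(q"$functionWrapper.function(40)")
println(compiled()) // 42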
... however the class is a dynamic class, and no class file is generated.
... however the class is available only for this run and is not generated as a file.
How did you check that the class is not generated? (Which class, FunctionWrapper?)
is there a way to add a method to an existing class at runtime and create a jar dynamically in Scala.
I would like to add a function to an existing class at runtime, compile it, and create a jar for it.
What is "existing class"? Do you have access to its sources? Then you can modify the sources, compile them etc.
Does the class exist as a .class file? You can modify its byte code with Byte-buddy, ASM, Javassist, cglib etc., instrument the byte code with aspects etc.
Is it dynamic class (like FunctionWrapper above)? How did you create it? (For FunctionWrapper you have access to its Symbol so you can use it in further sources.)
Is the class already loaded? Then you'll have to play with class loaders (unload, modify, load modified).
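For the .class-file route, here is a minimal sketch with Javassist (the class name com.example.Greeter, its greeting() method, and the output directory are hypothetical, made up for illustration):
import javassist.{ClassPool, CtNewMethod}

// Look the compiled class up on the default classpath.
val pool = ClassPool.getDefault
val ct = pool.get("com.example.Greeter") // hypothetical existing class

// Add a new method by compiling a Java source fragment against the class.
val m = CtNewMethod.make(
  "public String shout() { return greeting().toUpperCase(); }", ct)
ct.addMethod(m)

// Write the modified .class file under ./out; from there it can be
// packed into a jar (e.g. with the jar tool or java.util.jar).
ct.writeFile("out")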
Can a Java class add a method to itself at runtime?
In Java, given an object, is it possible to override one of the methods?

loading external scala scripts into a scala file

I originally made scripts with many functions in two individual Scala worksheets. I got them working and now want to tie these individual scripts together by importing them into a third file. From what I have read, you cannot simply import external scripts; you must first make them into a class and put them into a package. So I tried that, but I still couldn't import them.
I know this may be a bit basic for this site, but I'm struggling to find much Scala documentation.
I think my problem might stem from a misunderstanding of how packages work. My program example is below.
adder.scala
package adder

class adder {
  def add_to_this(AA: Int): Int = {
    var BB = AA + 1
    return BB
  }
}
build.scala
package builder

class build {
  def make_numbers() {
    var a = 0
    var b = 0
  }
}
main.sc
import adder
import builder

object main {
  adder.adder.add_to_this(10)
}
the errors i get are
object is not a member of package adder
object is not a member of package builder
Classes in Scala differ slightly from classes in Java. If you need something like a singleton, you'll want to use object instead of class, i.e.:
package com.example

object Main extends App {
  object Hide {
    object Adder {
      def addToThis(AA: Int): Int = AA + 1
    }
  }

  object Example {
    import com.example.Main.Hide.Adder
    def run(): Unit = println(Adder.addToThis(10))
  }

  Example.run()
}
Think of objects as packages/modules that are also regular values. You can import an object by its full path, i.e. com.example.Main.Hide.Adder; you can also import specific contents of an object by appending .{addToThis}, or everything from it by appending ._.
Note that classes, traits, and case classes cannot be used as objects: you can't do anything with them unless you have an instance, since there is no static modifier in Scala.
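Applied to the layout in the question, a minimal sketch of a fix (keeping the asker's file names, but using objects so there is something to call without an instance):
// adder.scala
package adder

object Adder {
  def add_to_this(AA: Int): Int = AA + 1
}

// main.scala
import adder.Adder

object Main extends App {
  println(Adder.add_to_this(10)) // prints 11
}
Compiling both files together (scalac adder.scala main.scala) and then running scala Main should avoid the "object is not a member of package" errors.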

snakeyaml and spark results in an inability to construct objects

The following code executes fine in a Scala shell with SnakeYAML version 1.17:
import org.yaml.snakeyaml.Yaml
import org.yaml.snakeyaml.constructor.Constructor
import scala.collection.mutable.ListBuffer
import scala.beans.BeanProperty
class EmailAccount {
  @scala.beans.BeanProperty var accountName: String = null

  override def toString: String = {
    return s"acct ($accountName)"
  }
}
val text = """accountName: Ymail Account"""
val yaml = new Yaml(new Constructor(classOf[EmailAccount]))
val e = yaml.load(text).asInstanceOf[EmailAccount]
println(e)
However when running in spark (2.0.0 in this case) the resulting error is:
org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:EmailAccount; exception=java.lang.NoSuchMethodException: EmailAccount.<init>()
in 'string', line 1, column 1:
accountName: Ymail Account
^
at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:350)
at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:182)
at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:141)
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:127)
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:450)
at org.yaml.snakeyaml.Yaml.load(Yaml.java:369)
... 48 elided
Caused by: org.yaml.snakeyaml.error.YAMLException: java.lang.NoSuchMethodException: EmailAccount.<init>()
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.createEmptyJavaBean(Constructor.java:220)
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.construct(Constructor.java:190)
at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:346)
... 53 more
Caused by: java.lang.NoSuchMethodException: EmailAccount.<init>()
at java.lang.Class.getConstructor0(Class.java:2810)
at java.lang.Class.getDeclaredConstructor(Class.java:2053)
at org.yaml.snakeyaml.constructor.Constructor$ConstructMapping.createEmptyJavaBean(Constructor.java:216)
... 55 more
I launched the scala shell with
scala -classpath "/home/placey/snakeyaml-1.17.jar"
I launched the spark shell with
/home/placey/Downloads/spark-2.0.0-bin-hadoop2.7/bin/spark-shell --master local --jars /home/placey/snakeyaml-1.17.jar
Solution
Create a self-contained application and run it using spark-submit instead of using spark-shell.
I've created a minimal project for you as a gist here. All you need to do is put both files (build.sbt and Main.scala) in some directory, then run:
sbt package
in order to create a JAR. The JAR will be in target/scala-2.11/sparksnakeyamltest_2.11-1.0.jar or a similar location. You can get SBT from here if you haven't used it yet. Finally, you can run the project:
/home/placey/Downloads/spark-2.0.0-bin-hadoop2.7/bin/spark-submit --class "Main" --master local --jars /home/placey/snakeyaml-1.17.jar target/scala-2.11/sparksnakeyamltest_2.11-1.0.jar
The output should be:
[many lines of Spark's log]
acct (Ymail Account)
[more lines of Spark's log]
Explanation
Spark's shell (REPL) transforms all classes you define in it by adding an $iw parameter to their constructors. I've explained it here. SnakeYAML expects a zero-parameter constructor for JavaBean-like classes, but there isn't one, so it fails.
You can try this yourself:
scala> class Foo() {}
defined class Foo
scala> classOf[Foo].getConstructors()
res0: Array[java.lang.reflect.Constructor[_]] = Array(public Foo($iw))
scala> classOf[Foo].getConstructors()(0).getParameterCount
res1: Int = 1
As you can see, Spark transforms the constructor by adding a parameter of type $iw.
Alternative solutions
Define your own Constructor
If you really need to get it working in the shell, you could define your own class extending org.yaml.snakeyaml.constructor.BaseConstructor and make sure that $iw gets passed to constructors, but this is a lot of work (I actually wrote my own Constructor in Scala for security reasons some time ago, so I have some experience with this).
You could also define a custom Constructor hard-coded to instantiate a specific class (EmailAccount in your case) similar to the DiceConstructor shown in SnakeYAML's documentation. This is much easier, but requires writing code for each class you want to support.
Example:
case class EmailAccount(accountName: String)

class EmailAccountConstructor extends org.yaml.snakeyaml.constructor.Constructor {
  val emailAccountTag = new org.yaml.snakeyaml.nodes.Tag("!emailAccount")
  this.rootTag = emailAccountTag
  this.yamlConstructors.put(emailAccountTag, new ConstructEmailAccount)

  private class ConstructEmailAccount extends org.yaml.snakeyaml.constructor.AbstractConstruct {
    def construct(node: org.yaml.snakeyaml.nodes.Node): Object = {
      // TODO: This is fine for quick prototyping in a REPL, but in a real
      // application you should probably add type checks.
      val mnode = node.asInstanceOf[org.yaml.snakeyaml.nodes.MappingNode]
      val mapping = constructMapping(mnode)
      val name = mapping.get("accountName").asInstanceOf[String]
      new EmailAccount(name)
    }
  }
}
You can save this as a file and load it in the REPL using :load filename.scala.
A bonus advantage of this solution is that it can create immutable case class instances directly. Unfortunately, the Scala REPL seems to have issues with imports, so I've used fully qualified names.
Don't use JavaBeans
You can also just parse YAML documents as simple Java maps:
scala> val yaml2 = new Yaml()
yaml2: org.yaml.snakeyaml.Yaml = Yaml:1141996301
scala> val e2 = yaml2.load(text)
e2: Object = {accountName=Ymail Account}
scala> val map = e2.asInstanceOf[java.util.Map[String, Any]]
map: java.util.Map[String,Any] = {accountName=Ymail Account}
scala> map.get("accountName")
res4: Any = Ymail Account
This way SnakeYAML won't need to use reflection.
However, since you're using Scala, I recommend trying MoultingYAML, a Scala wrapper for SnakeYAML. It parses YAML documents to simple Java types and then maps them to Scala types (even your own types like EmailAccount).
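A minimal MoultingYAML sketch (assuming the net.jcazevedo.moultingyaml artifact is on the classpath; the protocol boilerplate mirrors spray-json):
import net.jcazevedo.moultingyaml._
import net.jcazevedo.moultingyaml.DefaultYamlProtocol._

case class EmailAccount(accountName: String)

// Derive a YAML reader/writer for the one-field case class.
implicit val emailAccountFormat = yamlFormat1(EmailAccount.apply)

val e = "accountName: Ymail Account".parseYaml.convertTo[EmailAccount]
println(e) // EmailAccount(Ymail Account)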

Non-scala source positions with scala macros

I have a Scala macro which depends on an arbitrary XML file that is specified through a static string containing its location.
def myMacro(path: String) = macro myMacroImpl

def myMacroImpl(c: Context)(path: c.Expr[String]): c.Expr[Any] = {
  // load file specified by path and generate some code
  ...
}
This means that if the XML file is malformed, the macro will not be able to expand. At the moment I am providing an error message that contains a textual representation of the location of the error in the XML file. This, however, is obviously not the nicest solution.
Is it possible to provide source locations in different (possibly non-Scala) files for my generated code, so that errors point to the XML file instead of the Scala file where the XML file is included? I don't see how I can create locations myself instead of altering existing ones.
This use case is definitely very interesting, and it looks like something that should be supported in the reflection API. Unfortunately, at the moment, there's no public API to achieve this, even though the internal, albeit quite low-level, machinery is in place.
import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context
import scala.reflect.io.AbstractFile
import scala.reflect.internal.util.BatchSourceFile
import scala.reflect.internal.util.OffsetPosition

class Impl(val c: Context) {
  def impl: c.Tree = {
    val filePath = "foo.txt"
    val af = AbstractFile.getFile(filePath)
    val content = scala.io.Source.fromFile(filePath).mkString
    val sf = new BatchSourceFile(af, content)
    val pos = new OffsetPosition(sf, 3).asInstanceOf[c.universe.Position]
    c.abort(pos, "it works")
  }
}

object Macros {
  def foo: Any = macro Impl.impl
}

object Test extends App {
  Macros.foo
}
Running this code on a simple text file produces the following result:
20:56 ~/Projects/Master/sandbox (master)$ cat foo.txt
hello
world
20:56 ~/Projects/Master/sandbox (master)$ scalac Test.scala
foo.txt:1: error: it works
hello
^
one error found
Please note that this solution involves scala.reflect.internal and a cast (both of which invalidate all compatibility guarantees that we provide for scala-reflect.jar), so it's not something that I would recommend for production code.
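If you'd rather stay on the public API, the usual fallback is to report against the macro call site and embed the XML coordinates in the message text. A small sketch (xmlPath, line and col stand in for whatever your XML parser reports):
// Point at the expansion site, but name the real file and position
// in the message itself.
c.abort(
  c.enclosingPosition,
  s"malformed XML in $xmlPath at line $line, column $col")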

Program works when run with scala, get compile errors when try to compile it with scalac

I am testing the code below; it does a basic database query. It works fine when I run it from the CLI using "scala dbtest.scala", but gives me compile errors when I try to compile it with scalac:
[sean#ibmp2 pybackup]$ scalac dbtest.scala
dbtest.scala:5: error: expected class or object definition
val conn_str = "jdbc:mysql://localhost:3306/svn?user=svn&password=svn"
^
dbtest.scala:8: error: expected class or object definition
classOf[com.mysql.jdbc.Driver]
^
dbtest.scala:11: error: expected class or object definition
val conn = DriverManager.getConnection(conn_str)
^
dbtest.scala:12: error: expected class or object definition
try {
^
four errors found
import java.sql.{Connection, DriverManager, ResultSet}
import java.util.Date

// Change to Your Database Config
val conn_str = "jdbc:mysql://localhost:3306/svn?user=xx&password=xx"

// Load the driver
classOf[com.mysql.jdbc.Driver]

// Setup the connection
val conn = DriverManager.getConnection(conn_str)
try {
  // Configure to be Read Only
  val statement = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)

  // Execute Query
  val rs = statement.executeQuery("SELECT * FROM backup")

  // Iterate Over ResultSet
  var svnFiles = Set[String]()
  while (rs.next) {
    val repos = rs.getString("repos")
    val lm = rs.getDate("lastModified")
    val lb = rs.getDate("lastBackedup")
    if (lm.getTime() > lb.getTime()) {
      println(repos + " needs backing up")
      svnFiles += repos
    } else {
      println(repos + " doesn't need backing up")
    }
  }
  println(svnFiles)
} finally {
  conn.close
}
You need a class, object, or trait at the top level to make the source legal for compilation. The scala script runner accepts bare definitions and expressions, whereas scalac expects something that can be turned into Java .class files.
//imports here

object DbTest {
  def main(args: Array[String]) {
    // your code here
  }
}
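With the script wrapped like this, compiling and running would look something like the following (the MySQL connector jar name is an assumption; substitute whatever driver jar you use):
scalac -classpath mysql-connector-java.jar dbtest.scala
scala -classpath .:mysql-connector-java.jar DbTest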
Create a file called HelloWorld.scala, and enter the following:
object HelloWorld {
  def main(args: Array[String]) {
    println("Hello World")
  }
}
To compile the example, we use scalac, the Scala compiler. scalac works like most compilers: it takes a source file as argument, maybe some options, and produces one or several object files. The object files it produces are standard Java class files.
From the command line, run:
scalac HelloWorld.scala
This will generate a few class files in the current directory. One of them will be called HelloWorld.class, and contains a class which can be directly executed using the scala command.
Once compiled, a Scala program can be run using the scala command. Its usage is very similar to the java command used to run Java programs, and it accepts the same options. The above example can be executed with:
scala HelloWorld
"Hello World" will then be printed to the console.
After researching this functionality, I found an article that explains it in detail, and posted that information here on SO to help programmers understand this aspect of Scala development.
Source: http://docs.scala-lang.org/tutorials/scala-for-java-programmers.html