Is it possible to configure a Scala interpreter (tools.nsc.IMain) so that it "forgets" the previously executed code, whenever I run the next interpret() call?
Normally when it compiles the sources, it wraps them in nested objects, so all the previously defined variables, functions and bindings are available.
It would suffice to not generate the nested objects (or to throw them away), although I would prefer a solution which would even remove the previously compiled classes from the class loader again.
Is there a setting, a method, or something I can override, or an alternative to IMain, that would accomplish this? I still need to be able to access the resulting objects/classes from the host VM.
Basically I want to isolate subsequent interpret() calls without something as heavyweight as creating a new IMain for each iteration.
Here is one possible answer. Basically there is a method reset() which calls the following (mostly private, so either you buy the whole package or not):
clearExecutionWrapper()
resetClassLoader()
resetAllCreators()
prevRequests.clear()
referencedNameMap.clear()
definedNameMap.clear()
virtualDirectory.clear()
In my case, I am using a custom execution wrapper, so that needs to be set up again. Imports are also handled through a regular interpret cycle, so either add them again or, better, just prepend them via the execution wrapper.
Although I would like to keep my bindings, they are gone as well:
import tools.nsc._
import interpreter.IMain
object Test {
  private final class Intp(cset: nsc.Settings)
    extends IMain(cset, new NewLinePrintWriter(new ConsoleWriter, autoFlush = true)) {

    override protected def parentClassLoader = Test.getClass.getClassLoader
  }

  object Foo {
    def bar() { println("BAR") }
  }

  def run() {
    val cset = new nsc.Settings()
    cset.classpath.value += java.io.File.pathSeparator + sys.props("java.class.path")
    val i = new Intp(cset)
    i.initializeSynchronous()
    i.bind[Foo.type]("foo", Foo)

    val res0 = i.interpret("foo.bar(); val x = 33")
    println(s"res0: $res0")
    i.reset()
    val res1 = i.interpret("println(x)")
    println(s"res1: $res1")
    i.reset()
    val res2 = i.interpret("foo.bar()")
    println(s"res2: $res2")
  }
}
This will find Foo in the first iteration, correctly forget x in the second iteration, but then in the third iteration, it can be seen that the foo binding is also lost:
foo: Test.Foo.type = Test$Foo$@8bf223
BAR
x: Int = 33
res0: Success
<console>:8: error: not found: value x
println(x)
^
res1: Error
<console>:8: error: not found: value foo
foo.bar()
^
res2: Error
The following seems to be fine:
for (j <- 0 until 3) {
  val user = "foo.bar()"
  val synth = """import Test.{Foo => foo}
""".stripMargin + user
  val res = i.interpret(synth)
  println(s"res$j: $res")
  i.reset()
}
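An alternative to splicing the import into every snippet, if the binding itself is what you want to keep, could be to simply re-establish it after each reset(). A rough sketch (untested), reusing the Intp instance i and the Foo object from above:

for (j <- 0 until 3) {
  i.bind[Foo.type]("foo", Foo)   // re-create the binding that reset() discarded
  val res = i.interpret("foo.bar()")
  println(s"res$j: $res")
  i.reset()
}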
I'm looking to call the ATR function from this Scala wrapper for ta-lib, but I can't figure out how to use the wrapper correctly.
package io.github.patceev.talib

import com.tictactec.ta.lib.{Core, MInteger, RetCode}

import scala.concurrent.Future

object Volatility {

  def ATR(
    highs: Vector[Double],
    lows: Vector[Double],
    closes: Vector[Double],
    period: Int = 14
  )(implicit core: Core): Future[Vector[Double]] = {
    val arrSize = highs.length - period + 1

    if (arrSize < 0) {
      Future.successful(Vector.empty[Double])
    } else {
      val begin = new MInteger()
      val length = new MInteger()
      val result = Array.ofDim[Double](arrSize)

      core.atr(
        0, highs.length - 1, highs.toArray, lows.toArray, closes.toArray,
        period, begin, length, result
      ) match {
        case RetCode.Success =>
          Future.successful(result.toVector)
        case error =>
          Future.failed(new Exception(error.toString))
      }
    }
  }
}
Would someone be able to explain how to use the function and print the result to the console?
Many thanks in advance.
Regarding syntax, Scala is one of many languages where you call functions and methods passing arguments in parentheses (mostly, but let's keep it simple for now):
def myFunction(a: Int): Int = a + 1
myFunction(1) // myFunction is called and returns 2
On top of this, Scala allows you to specify multiple parameter lists, as in the following example:
def myCurriedFunction(a: Int)(b: Int): Int = a + b
myCurriedFunction(2)(3) // myCurriedFunction returns 5
You can also partially apply myCurriedFunction, but again, let's keep it simple for the time being. The main idea is that you can have multiple lists of arguments passed to a function.
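For completeness, here is a minimal sketch of partial application with the same myCurriedFunction; the explicit function type on the left is what lets the compiler turn the partially applied method into a function value:

val addTwo: Int => Int = myCurriedFunction(2) // the first parameter list is fixed
addTwo(3)                                     // returns 5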
Built on top of multiple parameter lists, Scala allows you to define a list of implicit parameters, which the compiler will automatically retrieve for you based on some scoping rules. Implicit parameters are used, for example, by Futures:
// this defines how and where callbacks are run
// the compiler will automatically "inject" them for you where needed
implicit val ec: ExecutionContext = concurrent.ExecutionContext.global
Future(4).map(_ + 1) // this will eventually result in a Future(5)
Note that both Future and map have a second parameter list that allows you to specify an implicit execution context. By having one in scope, the compiler will "inject" it for you at the call site, without you having to write it explicitly. You could still have written it out, and the result would have been
Future(4)(ec).map(_ + 1)(ec)
That said, I don't know the specifics of the library you are using, but the idea is that you have to instantiate a value of type Core and either bind it to an implicit val or pass it explicitly.
The resulting code will be something like the following
val highs: Vector[Double] = ???
val lows: Vector[Double] = ???
val closes: Vector[Double] = ???
implicit val core: Core = ??? // instantiate core
val resultsFuture = Volatility.ATR(highs, lows, closes) // core is passed implicitly
for (results <- resultsFuture; result <- results) {
  println(result)
}
Note that depending on your situation you may have to also use an implicit ExecutionContext to run this code (because you are extracting the Vector[Double] from a Future). Choosing the right execution context is another kind of issue but to play around you may want to use the global execution context.
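If you just want to see the values printed in a quick experiment, one option is to block on the Future with Await (fine for playing around, not for production code). A small sketch, where the 10-second timeout is an arbitrary assumption:

import scala.concurrent.Await
import scala.concurrent.duration._

// Block until the ATR computation completes, then print each value.
val results: Vector[Double] = Await.result(resultsFuture, 10.seconds)
results.foreach(println)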
Extra
Regarding some of the points I've left open, here are some pointers that hopefully will turn out to be useful:
Operators
Multiple Parameter Lists (Currying)
Implicit Parameters
Scala Futures
I am trying to use a macro to eliminate the need for Scala to construct a downward-passed function object. This code gets used in the inner loops of our system, and we don't want the inner loop to allocate objects endlessly. This is creating performance problems for us.
Our original code was this:
dis.withBitLengthLimit(newLimit){... body ...}
And the body was a function that was passed in as a function object.
The problem I have is that the original non-macro version refers to 'this'. My workaround below is to make each place the macro is called pass the 'this' object as another argument, which is ugly:
dis.withBitLengthLimit(dis, newLimit){... body ...}
It's not awful, but it sure seems like passing dis should be unnecessary.
Is there a cleaner way?
Here's the macro below.
import scala.reflect.macros.blackbox.Context // assuming a blackbox macro context here

object IOMacros {

  /**
   * Used to temporarily vary the bit length limit.
   *
   * Implementing as a macro eliminates the creation of a downward function object every time this
   * is called.
   *
   * ISSUE: this macro really wants to use a self reference to `this`. But when a macro is expanded
   * the object that `this` represents changes. Until a better way to do this comes about, we have to pass
   * the `this` object to the `self` argument, which makes calls look like:
   *   dis.withBitLengthLimit(dis, newLimit){... body ...}
   * That looks redundant, and it is, but it's more important to get the allocation of this downward function
   * object out of inner loops.
   */
  def withBitLengthLimitMacro(c: Context)(self: c.Tree, lengthLimitInBits: c.Tree)(body: c.Tree) = {
    import c.universe._
    q"""{
      import edu.illinois.ncsa.daffodil.util.MaybeULong

      val ___dStream = $self
      val ___newLengthLimit = $lengthLimitInBits
      val ___savedLengthLimit = ___dStream.bitLimit0b

      if (!___dStream.setBitLimit0b(MaybeULong(___dStream.bitPos0b + ___newLengthLimit))) false
      else {
        try {
          $body
        } finally {
          ___dStream.resetBitLimit0b(___savedLengthLimit)
        }
        true
      }
    }"""
  }
}
The prefix method on Context provides access to the expression that the macro method is called on, which should allow you to accomplish what you're trying to do. Here's a quick example of how you can use it:
import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context
class Foo(val i: Int) {
  def bar: String = macro FooMacros.barImpl
}

object FooMacros {
  def barImpl(c: Context): c.Tree = {
    import c.universe._

    val self = c.prefix

    q"_root_.scala.List.fill($self.i + $self.i)(${ self.tree.toString }).mkString"
  }
}
And then:
scala> val foo = new Foo(3)
foo: Foo = Foo@6fd7c13e
scala> foo.bar
res0: String = foofoofoofoofoofoo
Note that there are some issues you need to be aware of. prefix gives you the expression, which may not be a variable name:
scala> new Foo(2).bar
res1: String = new Foo(2)new Foo(2)new Foo(2)new Foo(2)
This means that if the expression has side effects, you have to take care not to include it in the result tree more than once (assuming you don't want them to happen multiple times):
scala> new Qux(1).bar
hey
hey
res2: String = new Qux(1)new Qux(1)
Here the constructor is called twice since we include the prefix expression in the macro's result twice. You can avoid this by defining a temporary variable in the macro:
object FooMacros {
  def barImpl(c: Context): c.Tree = {
    import c.universe._

    val tmp = TermName(c.freshName)
    val self = c.prefix

    q"""
    {
      val $tmp = $self

      _root_.scala.List.fill($tmp.i + $tmp.i)(${ self.tree.toString }).mkString
    }
    """
  }
}
And then:
scala> class Qux(i: Int) extends Foo(i) { println("hey") }
defined class Qux
scala> new Qux(1).bar
hey
res3: String = new Qux(1)new Qux(1)
Note that this approach (using freshName) is a lot better than just prefixing local variables in the macro with a bunch of underscores, which can cause problems if you include an expression that happens to contain a variable with the same name.
(Update about that last paragraph: actually I don't remember for sure if you can get yourself into problems with local variable names shadowing names that might be used in included trees. I avoid it myself, but I can't manufacture an example of it causing problems at the moment, so it might be fine.)
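Applied to the original question, a rough, untested sketch of withBitLengthLimitMacro without the extra self parameter might look like the following. It assumes the macro method is declared on the stream class itself, so that c.prefix is the dis instance; fresh names are used for the temporaries to avoid clashing with identifiers in body:

def withBitLengthLimitMacro(c: Context)(lengthLimitInBits: c.Tree)(body: c.Tree) = {
  import c.universe._

  // Fresh names avoid capturing identifiers that may occur in the spliced body.
  val dStream = TermName(c.freshName("dStream"))
  val newLimit = TermName(c.freshName("newLimit"))
  val savedLimit = TermName(c.freshName("savedLimit"))

  q"""{
    import edu.illinois.ncsa.daffodil.util.MaybeULong

    val $dStream = ${c.prefix.tree}
    val $newLimit = $lengthLimitInBits
    val $savedLimit = $dStream.bitLimit0b

    if (!$dStream.setBitLimit0b(MaybeULong($dStream.bitPos0b + $newLimit))) false
    else {
      try {
        $body
      } finally {
        $dStream.resetBitLimit0b($savedLimit)
      }
      true
    }
  }"""
}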
I want to write a simple Scala script that runs some methods that are defined in another file.
Each line that runs this method requires information that won't be available until runtime. For simplicity's sake, I want to abstract that portion out.
Thus, I want to use currying to get the result of each line in the script, then run the result again with the extra data.
object TestUtil {
  // "control" is not known until runtime
  def someTestMethod(x: Int, y: Int)(control: Boolean): Boolean = {
    if (control) {
      assert(x == y)
      x == y
    } else {
      assert(x > y)
      x > y
    }
  }
}
someTestMethod is defined in my primary codebase.
// testScript.sc
import <whateverpath>.TestUtil
TestUtil.someTestMethod(2,1)
TestUtil.someTestMethod(5,5)
Each line should return a function, that I need to rerun with a Boolean.
val control: Boolean = true
val testFuncs: List[Boolean => Boolean] = runFile("testScript.sc")
testFuncs.foreach(_(control)) // run all the functions that testScript.sc defined
(Sorry if this is a weird example, it's the simplest thing I can think of)
So far I have figured out how to parse the script, and get the Tree. However at that point I can't figure out how to execute each individual tree object and get the result. This is basically where I'm stuck!
val settings = new scala.tools.nsc.GenericRunnerSettings(println)
settings.usejavacp.value = true
settings.nc.value = true
val interpreter: IMain = new IMain(settings)
val treeResult: Option[List[Tree]] = interpreter.parse(
"""true
| 5+14""".stripMargin)
treeResult.get.foreach((tree: Tree) => println(tree))
the result of which is
true
5.$plus(14)
where 5 + 14 has not been evaluated, and I can't figure out how to do that, or whether this is even a worthwhile route to pursue.
In the worst case you could toString your modified tree and then call interpreter.interpret:
val results = treeResult.get.map { tree: Tree => interpreter.interpret(tree.toString) }
It seems like it would be better to have the other file evaluate to something that can then be interpreted passing control (e.g. using the scalaz Reader monad and for/yield syntax). But I assume you have good reasons for wanting to do this via an extra Scala interpreter.
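If the goal really is to get the partially applied functions back into the host program, one untested sketch is to interpret each statement as a named val and then pull it out with IMain's valueOfTerm. It assumes TestUtil is visible on the interpreter's classpath; the names testFunc0/testFunc1 and the cast are illustrative:

val lines = List(
  "TestUtil.someTestMethod(2, 1) _",
  "TestUtil.someTestMethod(5, 5) _"
)

val testFuncs: List[Boolean => Boolean] = lines.zipWithIndex.map { case (line, idx) =>
  val name = s"testFunc$idx"
  interpreter.interpret(s"val $name = $line")          // defines testFunc0, testFunc1, ...
  interpreter.valueOfTerm(name).get.asInstanceOf[Boolean => Boolean]
}

testFuncs.foreach(f => println(f(control)))            // apply the runtime Boolean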
I ran into this while using a PDF library, but there have been plenty of other occasions in which I would have something like this useful.
There are many situations in which you have a resource (that needs to be closed) and you use these resources for obtaining objects that are only valid as long as the resource is open and hasn't been released yet.
Let's say the b reference in the code below is only valid while a is open:
val a = open()
try {
  val b = a.someObject()
} finally {
  a.close()
}
Now, this code is fine, but this code isn't:
val b = {
  val a = open()
  try {
    a.someObject()
  } finally {
    a.close()
  }
}
With that code I would have a reference to something obtained from resource a while a is no longer open.
Ideally I'd like to have something like this:
// Nothing producing an instance of A yet, but just capturing the way A needs
// to be opened.
val a = Safe(open()) // Safe[A]
// Just building a function that opens a and extracts b, returning a Safe[B]
val b = a.map(_.someObject()) // Safe[B]
// Shouldn't compile since B is not safe to extract without being in the scope
// of an open A.
b.extract
// The c variable will hold something that is able to exist outside the scope of
// an open A.
val c = b.map(_.toString)
// So this should compile
c.extract
In your example, it is typical that an exception is thrown when you access a stream that is already closed. There is util.Try, which is made exactly for this use case:
scala> import scala.util._
import scala.util._
scala> val s = Try(io.Source.fromFile("exists"))
s: scala.util.Try[scala.io.BufferedSource] = Success(non-empty iterator)
// returns a safe value
scala> s.map(_.getLines().toList)
res21: scala.util.Try[List[String]] = Success(List(hello))
scala> s.map(_.close())
res22: scala.util.Try[Unit] = Success(())
scala> val data = s.map(_.getLines().toList)
data: scala.util.Try[List[String]] = Failure(java.io.IOException: Stream Closed)
// not safe anymore, thus you won't get access to the data with map
scala> data.map(_.length)
res24: scala.util.Try[Int] = Failure(java.io.IOException: Stream Closed)
Like other monads, Try encourages you not to access the wrapped value directly: you compose higher-order functions to operate on its value.
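As a small illustration of that composition (reusing the file name from the session above, purely as a sketch), the individual steps can be chained in a for-comprehension so that you only ever deal with a Try at the end:

import scala.util.Try

val lineCount: Try[Int] = for {
  source <- Try(io.Source.fromFile("exists"))
  lines  <- Try(source.getLines().toList)
  _      <- Try(source.close())
} yield lines.length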
Am I right understanding that
def is evaluated every time it gets accessed
lazy val is evaluated once it gets accessed
val is evaluated once it gets into the execution scope?
Yes, but there is one nice trick: if you have a lazy val and its first evaluation throws an exception, the next time you access it, it will try to re-evaluate itself.
Here is an example:
scala> import io.Source
import io.Source
scala> class Test {
| lazy val foo = Source.fromFile("./bar.txt").getLines
| }
defined class Test
scala> val baz = new Test
baz: Test = Test@ea5d87
//right now there is no bar.txt
scala> baz.foo
java.io.FileNotFoundException: ./bar.txt (No such file or directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:137)
...
// now I've created empty file named bar.txt
// class instance is the same
scala> baz.foo
res2: Iterator[String] = empty iterator
Yes, though for the 3rd one I would say "when that statement is executed", because, for example:
def foo() {
  new {
    val a: Any = sys.error("b is " + b)
    val b: Any = sys.error("a is " + a)
  }
}
This gives "b is null". b is never evaluated and its error is never thrown. But it is in scope as soon as control enters the block.
I would like to explain the differences through an example that I executed in the REPL. I believe this simple example is easier to grasp and explains the conceptual differences.
Here, I am creating a val result1, a lazy val result2 and a def result3, each of which has type String.
A). val
scala> val result1 = {println("hello val"); "returns val"}
hello val
result1: String = returns val
Here, println is executed because the value of result1 is computed here. From now on, result1 will always refer to its value, i.e. "returns val".
scala> result1
res0: String = returns val
You can see that result1 now refers to its value. Note that the println statement is not executed here, because the value of result1 was already computed the first time. From now on, result1 will always return the same value, and the println statement will never be executed again, because the computation of result1's value has already been performed.
B). lazy val
scala> lazy val result2 = {println("hello lazy val"); "returns lazy val"}
result2: String = <lazy>
As we can see here, the println statement is not executed, nor has the value been computed. This is the nature of laziness.
Now, when I refer to result2 for the first time, the println statement is executed and the value is computed and assigned.
scala> result2
hello lazy val
res1: String = returns lazy val
When I refer to result2 again, we only see the value it holds, and the println statement is not executed. From now on, result2 simply behaves like a val and returns its cached value.
scala> result2
res2: String = returns lazy val
C). def
In the case of def, the result has to be computed every time result3 is called. This is also the main reason we define methods with def in Scala: a method has to compute and return a value every time it is called in the program.
scala> def result3 = {println("hello def"); "returns def"}
result3: String
scala> result3
hello def
res3: String = returns def
scala> result3
hello def
res4: String = returns def
One good reason for choosing def over val, especially in abstract classes (or in traits that are used to mimic Java's interfaces), is that you can override a def with a val in subclasses, but not the other way round.
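A minimal sketch of that asymmetry (the names are made up for illustration):

trait Named {
  def name: String                  // abstract def in the trait
}

class Fixed extends Named {
  val name: String = "fixed"        // fine: a val may implement/override a def
}

// The other direction is rejected by the compiler:
// trait Named2 { val name: String }
// class Broken extends Named2 { def name: String = "nope" } // does not compile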
Regarding lazy, there are two things to keep in mind. The first is that lazy introduces some runtime overhead, but you would need to benchmark your specific situation to find out whether this actually has a significant impact on performance. The other is that lazy possibly delays raising an exception, which can make it harder to reason about your program, because the exception is not thrown upfront but only on first use.
You are correct. For evidence from the specification:
From "3.3.1 Method Types" (for def):
Parameterless methods name expressions that are re-evaluated each time
the parameterless method name is referenced.
From "4.1 Value Declarations and Definitions":
A value definition val x : T = e defines x as a name of the value that results from
the evaluation of e.
A lazy value definition evaluates its right hand side e the first
time the value is accessed.
def defines a method. When you call the method, the method of course runs.
val defines a value (an immutable variable). The assignment expression is evaluated when the value is initialized.
lazy val defines a value with delayed initialization. It will be initialized when it's first used, so the assignment expression will be evaluated then.
A name defined with def is evaluated by substituting its RHS expression for the name every time the name appears; the evaluation therefore happens everywhere the name is used in your program.
A name defined with val is evaluated immediately when control reaches its RHS expression; every subsequent use of the name refers to the value of that single evaluation.
A name defined with lazy val follows the same policy as val, except that its RHS is evaluated only when control reaches the point where the name is used for the first time.
It's worth pointing out a potential pitfall regarding the use of val when working with values not known until runtime.
Take, for example, request: HttpServletRequest
If you were to say:
val foo = request accepts "foo"
You would get a null pointer exception, because at the point of initialization of the val, request has no foo (it would only be known at runtime).
So, depending on the expense of access/calculation, def or lazy val are the appropriate choices for runtime-determined values; that, or a val that is itself an anonymous function which retrieves the runtime data (although the latter seems a bit more of an edge case).
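For that last suggestion, a tiny sketch (reusing the hypothetical request/accepts names from above): the val holds a function, so nothing touches the request at initialization time, and the data is only read when the function is invoked later at runtime.

// Nothing is read from the request when this val is initialized...
val foo: () => String = () => request accepts "foo"

// ...only when the function is actually invoked, once the data exists:
val value = foo()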