Scala Ambiguous Variable Name Within A Method - scala

I've seen some questions regarding Scala and variable scoping (such as Scala variable scoping question)
However, I'm having trouble getting my particular use-case to work.
Let's say I have a trait called Repo:
trait Repo {
  val source: String
}
And then I have a method to create an implementation of Repo...
def createRepo(source: String) =
  new Repo {
    val source: String = source
  }
Of course I have two source variables in use, one at the method level and one inside of the Repo implementation. How can I refer to the method-level source from within my Repo definition?
Thanks!

Not sure if this is the canonical way, but it works:
def createRepo(source: String) = {
  val sourceArg = source
  new Repo {
    val source = sourceArg
  }
}
Or, you could just give your parameter a different name that doesn't clash.
Or, make a factory:
object Repo {
  def apply(src: String) = new Repo { val source = src }
}

def createRepo(source: String) = Repo(source)

In addition to Luigi's solutions, you might also consider changing Repo from a trait to a class:
class Repo(val source: String)
def createRepo(source: String) = new Repo(source)
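For completeness, a minimal sketch of the class-based version in use (names taken from the question, the "github" value is just an example):

```scala
// The constructor parameter becomes the member directly, so there is
// no inner `val source` to shadow the method parameter.
class Repo(val source: String)

def createRepo(source: String): Repo = new Repo(source)

val repo = createRepo("github")
// repo.source is "github"
```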

Related

Get classname of the running Databricks Job

There is an Apache Spark Scala project (runnerProject) that uses another one in the same package (sourceProject). The aim of the source project is to get the name and version of the Databricks job that is running.
The problem with the following method is that when it is called from the runnerProject, it returns the sourceProject's details, not the runnerProject's name and version.
sourceProject's method:
class EnvironmentInfo(appName: String) {
  def getJobDetails(): (String, String) = {
    val classPackage = getClass.getPackage
    val jobName = classPackage.getImplementationTitle
    val jobVersion = classPackage.getImplementationVersion
    (jobName, jobVersion)
  }
}
runnerProject uses sourceProject as a package:
import com.sourceProject.environment.EnvironmentInfo

class runnerProject {
  def start(environment: EnvironmentInfo): Unit = {
    // using the returned parameters of the environment
  }
}
How can this issue be worked around so that getJobDetails() stays in the sourceProject and can be called from other projects as well, not just the runnerProject? It should also return the details of the "caller" job.
Thank you in advance! :)
Try the following: it gets the calling class name from the stack trace, and uses that to look up the actual class and its package.
class EnvironmentInfo(appName: String) {
  def getJobDetails(): (String, String) = {
    val callingClassName = Thread.currentThread.getStackTrace()(2).getClassName
    val classPackage = Class.forName(callingClassName).getPackage
    val jobName = classPackage.getImplementationTitle
    val jobVersion = classPackage.getImplementationVersion
    (jobName, jobVersion)
  }
}
It will work if you call it directly, but it might give you the wrong package if you call it from within a lambda function.
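If you are on Java 9+, a hedged alternative sketch is StackWalker, which resolves the caller class without a string-based Class.forName lookup. The Option(...) guard is my addition: getPackage returns null for classes in the default package, and the manifest-based title/version are null when running outside a packaged jar.

```scala
import java.lang.StackWalker

class EnvironmentInfo(appName: String) {
  // RETAIN_CLASS_REFERENCE is required for getCallerClass to work
  private val walker =
    StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE)

  def getJobDetails(): (String, String) = {
    val callerClass = walker.getCallerClass  // the class that invoked getJobDetails
    val pkg = Option(callerClass.getPackage) // null in the default package
    (pkg.map(_.getImplementationTitle).orNull,
     pkg.map(_.getImplementationVersion).orNull)
  }
}
```

The same caveat as the stack-trace approach applies: getCallerClass reports whoever invoked getJobDetails directly, so an intermediate lambda or helper can still show up as the caller.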

How To Create Temporary Directory in Scala Unit Tests

In scala how can a unit test create a temporary directory to use as part of the testing?
I am trying to unit test a class which depends on a directory
class UsesDirectory(directory: java.io.File) {
  ...
}
I'm looking for something of the form:
class UsesDirectorySpec extends FlatSpec {
  val tempDir = ??? // missing piece
  val usesDirectory = UsesDirectory(tempDir)

  "UsesDirectory" should {
    ...
  }
}
Also, any comments/suggestions on appropriately cleaning up the resource after the unit testing is completed would be helpful.
Thank you in advance for your consideration and response.
Krzysztof's answer provides a good strategy for avoiding the need for temp directories in your tests altogether.
However, if you do need UsesDirectory to work with real files, you can do something like the following to create a temporary directory:
import java.nio.file.Files
val tempDir = Files.createTempDirectory("some-prefix").toFile
Regarding cleanup, you could use the JVM shutdown hook mechanism to delete your temp files.
(java.io.File does provide a deleteOnExit() method, but it doesn't work on non-empty directories.)
You could install a custom hook with sys.addShutdownHook {}, and use Files.walk or Files.walkFileTree to delete the contents of your temp directory.
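A minimal sketch of that shutdown-hook cleanup (the "some-prefix" name is arbitrary):

```scala
import java.nio.file.{Files, Path}
import java.util.Comparator
import scala.jdk.CollectionConverters._ // Scala 2.13+

val tempDir: Path = Files.createTempDirectory("some-prefix")

// Delete the tree on JVM exit; reverseOrder sorts deepest paths first,
// so files are removed before the directories that contain them.
sys.addShutdownHook {
  Files.walk(tempDir)
    .sorted(Comparator.reverseOrder[Path]())
    .iterator()
    .asScala
    .foreach(Files.deleteIfExists(_))
}
```

For test suites specifically, doing the same deletion in ScalaTest's afterAll (rather than at JVM exit) keeps directories from piling up across runs within one forked JVM.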
Also, you may want to take a look at the better-files library, which provides a less verbose Scala API for common file operations, including File.newTemporaryDirectory() and file.walk().
File in Java is very cumbersome to test: there is no simple virtual-filesystem abstraction you can swap in for tests.
A cool way around it is to create a wrapper that can be used for stubbing and mocking.
For example:
trait FileOps { // trait which works as a proxy for File
  def getName(): String
  def exists(): Boolean
}

object FileOps {
  class FileOpsImpl(file: File) extends FileOps {
    override def getName(): String = file.getName // delegate all methods you need
    override def exists(): Boolean = file.exists()
  }

  implicit class FromFile(file: File) { // implicit method to convert File to FileOps
    def toFileOps: FileOpsImpl = new FileOpsImpl(file)
  }
}
Then you'd have to use it instead of File in your class:
class UsesDirectory(directory: FileOps) {
  ...
}

// maybe you could even add an implicit conversion, but it's better to do it explicitly
val directory = new UsesDirectory(file.toFileOps)
And what is the benefit of that?
In your tests you can provide custom implementation of FileOps:
class UsesDirectorySpec extends FlatSpec {
  val dummyFileOps = new FileOps {
    override def getName(): String = "mock"
    override def exists(): Boolean = true
  }

  // OR
  val mockFileOps = mock[FileOps] // you can mock it easily, since it's only a trait

  val usesDirectory = UsesDirectory(dummyFileOps)

  "UsesDirectory" should {
    ...
  }
}
If you use this or a similar approach, you don't even need to touch the filesystem in your unit tests.

What is Clone in Chisel

I am a new learner of Chisel. What is the purpose of cloning in Chisel? I saw it written somewhere that "it creates a shallow copy". Why do we need it?
Here are two examples; could you please elaborate on them?
1)
class Valid[+T <: Data](gen: T) extends Bundle {
  val valid = Output(Bool())
  val bits = Output(gen.chiselCloneType) // ?????

  def fire(): Bool = valid

  override def cloneType: this.type = Valid(gen).asInstanceOf[this.type]
}

/** Adds a valid protocol to any interface */
object Valid {
  def apply[T <: Data](gen: T): Valid[T] = new Valid(gen)
}
2)
class Packet(n: Int, w: Int) extends Bundle {
  val address = UInt(Log2Up(n).W)
  val payload = UInt(w.W)

  override def cloneType: this.type =
    new Packet(n, w).asInstanceOf[this.type]
}
Why is cloneType overridden? Is it like an apply method in Scala, or does it just update the cloneType method in Bundle?
Thanks
A typical use case for bundles in Chisel is to create an instance with a particular set of parameters and then use that instance as a template, i.e. to create new instances of the same type from it. In many cases Chisel can do the cloning automatically and the user does not need to implement cloneType. But due to current limitations of Scala and Chisel (usually when the bundle has multiple parameters), Chisel cannot always figure out how to make the copy, and the developer must implement cloneType manually. Recent developments in Chisel will nearly eliminate the need to implement cloneType; this is part of the 3.1.0 release scheduled for this month. See the autoclonetype issue for details.

How to add a new Class in a Scala Compiler Plugin?

In a Scala Compiler Plugin, I'm trying to create a new class that implement a pre-existing trait. So far my code looks like this:
def trait2Impl(original: ClassDef, newName: String): ClassDef = {
  val impl = original.impl
  // Seems OK to have the same self, but does not make sense to me ...
  val self = impl.self
  // TODO: implement methods ...
  val body = impl.body
  // We implement original
  val parents = original :: impl.parents
  val newImpl = treeCopy.Template(impl, parents, self, body)
  val name = newTypeName(newName)
  // We are a synthetic class, not a user-defined trait
  val mods = (original.mods | SYNTHETIC) &~ TRAIT
  val tp = original.tparams
  val result = treeCopy.ClassDef(original, mods, name, tp, newImpl)
  // Same package?
  val owner = original.symbol.owner
  // New symbol. What's a Position good for?
  val symbol = new TypeSymbol(owner, NoPosition, name)
  result.setSymbol(symbol)
  symbol.setFlag(SYNTHETIC)
  symbol.setFlag(ABSTRACT)
  symbol.resetFlag(INTERFACE)
  symbol.resetFlag(TRAIT)
  owner.info.decls.enter(symbol)
  result
}
But it doesn't seem to get added to the package. I suspect that is because the package was "traversed" before the trait that triggers the generation, and/or because the override def transform(tree: Tree): Tree method of the TypingTransformer can only return one Tree for every Tree that it receives, so it cannot actually produce a new Tree, only modify one.
So, how do you add a new class to an existing package? Maybe it would work if I transformed the package when transform(Tree) gets it, but at that point I don't know the content of the package yet, so I cannot generate the new class this early (or could I?). Or maybe it's related to the Position parameter of the Symbol?
So far I found several examples where Trees are modified, but none where a completely new Class is created in a Compiler Plugin.
The full source code is here: https://gist.github.com/1794246
The trick is to store the newly created ClassDefs and use them when creating a new PackageDef. Note that you need to deal with both Symbols and trees: a package symbol is just a handle. In order to generate code, you need to generate an AST (just like for a class, where the symbol holds the class name and type, but the code is in the ClassDef trees).
As you noted, package definitions are higher up the tree than classes, so you'd need to recurse first (assuming you'll generate the new class from an existing class). Then, once the subtrees are traversed, you can prepare a new PackageDef (every compilation unit has a package definition, which by default is the empty package) with the new classes.
In the example, assuming the source code is
class Foo {
  def foo {
    "spring"
  }
}
the compiler wraps it into
package <empty> {
  class Foo {
    def foo {
      "spring"
    }
  }
}
and the plugin transforms it into
package <empty> {
  class Foo {
    def foo {
      "spring"
    }
  }

  package mypackage {
    class MyClass extends AnyRef
  }
}

What is the correct way to specify type variance for methods in a companion object

For me one of the more confusing aspects of the Scala type system is understanding covariance, contravariance, type bounds etc.
I am trying to create a generic Repository trait that can be extended by the companion objects of classes that extend a Page trait. The idea is that the companion object will be responsible for creating new instances, etc. These page instances will need to be cleaned up if they haven't been accessed within some period of time, so the base Repository trait registers them in a list of repositories that can be checked in a background actor thread.
Below is a stripped-down version of the code. I'm getting a type mismatch error on the call to register(pages): the compiler found HashMap[String, T] but is expecting HashMap[String, Page]. I can't figure out what to do to make the compiler happy. I can define the register method as def register[T <: Page](repo: HashMap[String, T]) ..., but that just defers the problem to the reference to the var repos, which I cannot qualify generically. I would appreciate it if someone could demonstrate the correct way to specify the types.
EDIT I can get it to work if I declare the hashmap as HashMap[String, Page] and then cast the page value retrieved from the hashmap with page.asInstanceOf[T]. Is there a way to avoid the cast?
trait Page {
  val id = Random.hex(8)
  private var lastAccessed = new Date
  ...
}
object Page {
  import scala.collection.mutable.HashMap

  trait Repository[T <: Page] {
    private val pages = new HashMap[String, T]

    register(pages)

    def newPage: T

    def apply(): T = {
      val page = newPage
      pages(page.id) = page
      page
    }

    def apply(id: String): T = {
      pages.get(id) match {
        case Some(page) =>
          page.lastAccessed = now
          page
        case None =>
          this()
      }
    }
    ...
  }

  private var repos: List[HashMap[String, Page]] = Nil

  private def register(repo: HashMap[String, Page]) {
    repos = repo :: repos
  }
  ...
}
class CoolPage extends Page

object CoolPage extends Page.Repository[CoolPage] {
  def newPage = new CoolPage
}

val p = CoolPage()
First thing to note is that the mutable HashMap is invariant: class HashMap[A, B]. The immutable version, though, is covariant in its values: class HashMap[A, +B].
Second thing to note is that your repos variable is meant to be a polymorphic collection, which means that some compile-time type information is lost when you put stuff there.
But since you use the mutable HashMap, repos can't actually be a correct polymorphic collection, because of HashMap's invariance. To illustrate why, let's suppose Page were a class (so we could instantiate it) and we put a HashMap[String, CoolPage] in the repos list. Then we could do this:
val m = repos.head          // HashMap[String, Page]
m.put("12345678", new Page) // we just added a Page to a HashMap[String, CoolPage]
So the compiler gives you an error to protect you from this.
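A small self-contained sketch of the difference (Page/CoolPage are stand-ins for the question's classes; the standard library declares immutable Map[K, +V] but mutable HashMap[K, V]):

```scala
class Page
class CoolPage extends Page

// Immutable maps are covariant in their value type, so this upcast compiles:
val cool: Map[String, CoolPage] = Map("a" -> new CoolPage)
val asPages: Map[String, Page] = cool

// The mutable equivalent is rejected at compile time, which is exactly
// the protection described above -- otherwise we could sneak a plain
// Page into a HashMap[String, CoolPage] through the widened reference:
// val m: scala.collection.mutable.HashMap[String, Page] =
//   new scala.collection.mutable.HashMap[String, CoolPage] // type mismatch
```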
I guess you can fix your code by making Repository covariant:
trait Repository[+T <: Page] {
  private[this] val pages = new HashMap[String, T]

  register(this)

  def newPage: T

  def apply(): T = {
    val page = newPage
    pages(page.id) = page
    page
  }

  def apply(id: String): T = {
    pages.get(id) match {
      case Some(page) =>
        page.lastAccessed = new Date
        page
      case None =>
        this()
    }
  }
}
And changing repos to be a list of Repository[Page]:
private var repos: List[Repository[Page]] = Nil

private def register(repo: Repository[Page]) {
  repos = repo :: repos
}
And remember that polymorphic collections (like repos) make you lose compile-time type information about the elements: if you put a Repository[CoolPage] there, you only get a Repository[Page] back and have to deal with it.
update: removed the .asInstanceOf[T] from the Repository code by making pages private[this].
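Putting it together, a compiling sketch of the fixed design (trimmed to the essentials: Random.alphanumeric replaces the question's Random.hex, and the lastAccessed handling is omitted):

```scala
import scala.collection.mutable.HashMap
import scala.util.Random

trait Page {
  val id: String = Random.alphanumeric.take(8).mkString
}

object Page {
  private var repos: List[Repository[Page]] = Nil
  private def register(repo: Repository[Page]): Unit = { repos = repo :: repos }

  trait Repository[+T <: Page] {
    // private[this] exempts the invariant mutable map from the variance check
    private[this] val pages = new HashMap[String, T]

    register(this) // Repository[T] <: Repository[Page] thanks to +T

    def newPage: T

    def apply(): T = {
      val page = newPage
      pages(page.id) = page
      page
    }
  }
}

class CoolPage extends Page
object CoolPage extends Page.Repository[CoolPage] {
  def newPage = new CoolPage
}

val p = CoolPage() // creates and registers a CoolPage
```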