Get filename of the current file in Scala

Is there a way to get the file name of the current file (i.e. the file in which the code is written) in Scala?
For example, my class is in a file like com/mysite/app/myclass.scala, and I want to call a method that returns "myclass.scala" (or the full path...).
Thank you!

This can be achieved with Scala macros, an experimental language feature available from version 2.10.
Macros make it possible to interact with the abstract syntax tree (AST) that the compiler builds from the source code, and to modify AST trees before the actual compilation is performed.
Since information about the compilation context is available to the macro through the Context object, it is possible to retrieve the source file name and return it as a String literal, which is inserted in place of the macro call in the AST.
What follows is a working example of returning the source file name. The example is divided into two files:
A source file where the macro is defined and implemented:
// Contents of: "Macros.scala"
import scala.reflect.macros.Context
import scala.language.experimental.macros
object Macros {
def sourceFile: String = macro sourceFileImpl
def sourceFileImpl(c: Context) = {
import c.universe._
c.Expr[String](Literal(Constant(c.enclosingUnit.source.path.toString)))
}
}
Another source file where the macro is used:
// Contents of: "Main.scala"
object Main extends App {
val fileName = Macros.sourceFile
println(fileName)
}
The macro implementation and the code using it must be in different source files. The file name returned is the correct one, i.e. the path of the source file that contains the macro call.
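If only the bare file name (e.g. "myclass.scala") is wanted rather than the full path, the same technique should work using the source file's name instead of its path. A minimal sketch (untested; the object name is made up for illustration):
// Hypothetical variant of the macro above, same setup otherwise.
import scala.reflect.macros.Context
import scala.language.experimental.macros

object FileNameMacros {
  def sourceFileName: String = macro sourceFileNameImpl

  def sourceFileNameImpl(c: Context) = {
    import c.universe._
    // SourceFile.file is an AbstractFile; its `name` is the last path segment.
    c.Expr[String](Literal(Constant(c.enclosingUnit.source.file.name)))
  }
}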

Related

Can application config be loaded in a scala macro?

I'm attempting to write a macro that depends on some information in my Play application's configuration. I'd like to use some configuration to generate the tree in the macro implementation.
When I attempt to load that configuration in the macro, I see an error that no configuration setting was found:
Error:(80, 16) exception during macro expansion:
com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'auth-service'
at com.typesafe.config.impl.SimpleConfig.findKeyOrNull(SimpleConfig.java:156)
at com.typesafe.config.impl.SimpleConfig.findOrNull(SimpleConfig.java:174)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:188)
at com.typesafe.config.impl.SimpleConfig.find(SimpleConfig.java:193)
at com.typesafe.config.impl.SimpleConfig.getObject(SimpleConfig.java:268)
at com.typesafe.config.impl.SimpleConfig.getObject(SimpleConfig.java:41)
at mac.MyMacro$.mImpl(MyMacro.scala:16)
MyMacro.m()
When using the same config loading code in the test case, everything loads fine.
My macro looks like this:
import scala.language.experimental.macros
import scala.reflect.macros.blackbox.Context
import com.typesafe.config.ConfigFactory

object MyMacro {
  def m(): List[Int] = macro MyMacro.mImpl

  def mImpl(c: Context)() = {
    import c.universe._
    ConfigFactory.load().getObject("auth-service") // this fails
    q"""
      List(1, 2)
    """
  }
}
And the test that's attempting to execute it looks like this:
"test macro" in {
ConfigFactory.load().getObject("auth-service") // this succeeds
MyMacro.m()
}
Can you please help me understand why the application config isn't being loaded or isn't available within the macro? If it's not possible to load config this way, what is a common way to solve a problem like this where a macro depends on some declared configuration?
Macros need to be defined in a separate project; ConfigFactory.load() in that project will look for configuration files in the classpath of that project, and not of the project which uses the macro. So if you can, the part of the configuration used by the macro should live in that project.
Alternatively, you can use one of the ConfigFactory.parseFile() overloads to pass a specific file, but then your macro needs to know the path to the application's configuration file.
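A minimal sketch of the parseFile() alternative (the object name, path, and system property are hypothetical; the key matches the question):
import java.io.File
import com.typesafe.config.ConfigFactory

object MacroConfig {
  // The macro has to be told where the using project's configuration lives,
  // e.g. via a system property passed to the compiler JVM or a hard-coded path.
  private val configFile =
    new File(sys.props.getOrElse("app.config.path", "conf/application.conf"))

  // Parsed once and reused by the macro implementation.
  val config = ConfigFactory.parseFile(configFile).resolve()
}
The macro implementation can then call MacroConfig.config.getObject("auth-service") instead of ConfigFactory.load().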

Sealed trait and dynamic case objects

I have a few enumerations implemented as sealed traits and case objects. I prefer the ADT approach because of the non-exhaustive match warnings, and mostly because we want to avoid type erasure. Something like this:
sealed abstract class Maker(val value: String) extends Product with Serializable {
  override def toString = value
}

object Maker {
  case object ChryslerMaker extends Maker("Chrysler")
  case object ToyotaMaker extends Maker("Toyota")
  case object NissanMaker extends Maker("Nissan")
  case object GMMaker extends Maker("General Motors")
  case object UnknownMaker extends Maker("")

  val tipos = List(ChryslerMaker, ToyotaMaker, NissanMaker, GMMaker, UnknownMaker)
  private val fromStringMap: Map[String, Maker] = tipos.map(s => s.toString -> s).toMap
  def apply(key: String): Option[Maker] = fromStringMap.get(key)
}
This is working well so far, now we are considering providing access to other programmers to our code to allow them to configure on site. I see two potential problems:
1) People messing up and writing things like:
case object ChryslerMaker extends Maker("Nissan")
and people forgetting to update the tipos
I have been looking into using a configuration file (JSON or csv) to provide these values and read them as we do with plenty of other elements, but all the answers I have found rely on macros and seem to be extremely dependent on the scala version used (2.12 for us).
What I would like to find is:
1a) (Preferred) a way to dynamically create the case objects from a list of strings, making sure the objects are named consistently with the value they hold
1b) (Acceptable) if this proves too hard, a way to obtain the objects and their values during the test phase
2) Check that the number of elements in the list matches the number of case objects created.
I forgot to mention: I have looked briefly at enumeratum, but I would prefer not to include additional libraries unless I really understand the pros and cons (and right now I am not sure how enumeratum compares with the ADT approach; if you think this is the best way and can point me to such a discussion, that would work great).
Thanks !
One idea that comes to my mind is to create an SBT source generator task.
It will read an input JSON, CSV, XML or whatever file that is part of your project, and will create a Scala file.
// ----- File: project/VendorsGenerator.scala -----
import sbt.Keys._
import sbt._

/**
 * An SBT task that generates a managed source file with all vendors.
 */
object VendorsGenerator {
  // For demonstration, I will use this plain List[String] to generate the code,
  // you may change the code to read a file instead.
  // Or maybe this will be good enough.
  final val vendors: List[String] =
    List(
      "Chrysler",
      "Toyota",
      ...
      "Unknown"
    )

  val generatorTask = Def.task {
    // Make the 'tipos' List, which contains all vendors.
    val tipos =
      vendors
        .map(vendorName => s"${vendorName}Vendor")
        .mkString("val tipos: List[Vendor] = List(", ",", ")")

    // Make a case object for each vendor.
    val vendorObjects = vendors.map { vendorName =>
      s"""case object ${vendorName}Vendor extends Vendor { override final val value: String = "${vendorName}" }"""
    }

    // Fill the code template.
    val code =
      List(
        List(
          "package vendors",
          "sealed trait Vendor extends Product with Serializable {",
          "def value: String",
          "override final def toString: String = value",
          "}",
          "object Vendors extends (String => Option[Vendor]) {"
        ),
        vendorObjects,
        List(
          tipos,
          "private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap",
          "override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)",
          "}"
        )
      ).flatten

    // Save the new file to the managed sources dir.
    val vendorsFile = (sourceManaged in Compile).value / "vendors.scala"
    IO.writeLines(vendorsFile, code)
    Seq(vendorsFile)
  }
}
Now, you can activate your source generator.
This task will be run each time, before the compile step.
// ----- File: build.sbt -----
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue
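For reference, the file produced by the task above would look roughly like this (a sketch based on the vendor list shown earlier; the real output is emitted as flat lines without indentation):
// Generated into target/scala-2.12/src_managed/main/vendors.scala - do not edit by hand.
package vendors

sealed trait Vendor extends Product with Serializable {
  def value: String
  override final def toString: String = value
}

object Vendors extends (String => Option[Vendor]) {
  case object ChryslerVendor extends Vendor { override final val value: String = "Chrysler" }
  case object ToyotaVendor extends Vendor { override final val value: String = "Toyota" }
  // ... one case object per entry in the vendors list ...

  val tipos: List[Vendor] = List(ChryslerVendor, ToyotaVendor /*, ... */)
  private final val fromStringMap: Map[String, Vendor] = tipos.map(v => v.toString.toLowerCase -> v).toMap
  override def apply(key: String): Option[Vendor] = fromStringMap.get(key.toLowerCase)
}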
Please note that I suggest this because I have done it before, and because I don't have any macro or metaprogramming experience.
Also, note that this example relies heavily on Strings, which makes the code a little bit hard to understand and maintain.
BTW, I haven't used enumeratum, but from a quick look it seems like the best solution to this problem.
Edit
I have my code ready to read a HOCON file and generate the matching code. My question now is where to place the generator's Scala file in the project directory, and where the generated files will end up. I am a little bit confused because there seem to be multiple steps: 1) compile my Scala generator, 2) run the generator, and 3) compile and build the project. Is this right?
Your generator is not part of your project code, but of your meta-project (I know that sounds confusing; you may read this to understand it). As such, you place the generator inside the project folder at the root level (the same folder that holds the build.properties file specifying the sbt version).
If your generator needs some dependencies (I'm sure it does for reading the HOCON) you place them in a build.sbt file inside that project folder.
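For instance, a minimal project/build.sbt for that purpose might look like this (a sketch; the Typesafe Config artifact is a plausible choice for reading HOCON, and the version is only an example):
// ----- File: project/build.sbt -----
// Dependencies declared here are available to the meta-project,
// i.e. to the generator code living under project/.
libraryDependencies += "com.typesafe" % "config" % "1.3.3"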
If you plan to add unit tests to the generator, you may create an entire Scala project inside the meta-project (for reference, you may take a look at the project folder of an open source project I work on - yes, yes, I know, confusing again). My personal suggestion is that, rather than testing the generator itself, you should test the generated file instead, or better, both.
The generated file will be placed automatically in the src_managed folder (which lives inside target and is thus ignored by your source code version control).
The path inside that folder is just for organization, as everything inside the src_managed folder is included by default when compiling.
val vendorsFile = (sourceManaged in Compile).value / "vendors.scala" // Path to the file to write.
In order to access the values defined in the generated file on your source code, you only need to add a package to the generated file and import the values from that package in your code (as with any normal file).
You don't need to worry about anything related with compilation order, if you include your source generator in your build.sbt file, SBT will take care of everything automatically.
sourceGenerators in Compile += VendorsGenerator.generatorTask.taskValue // Activate the source generator.
SBT will re-run your generator every time it needs to compile.
"BTW I get "not found: object sbt" on the imports".
If the project is inside the meta-project space, it will find the sbt package by default, don't worry about it.

Obtaining a SemanticDocument from a Scala source file using ScalaFix

What are the steps of parsing a Scala source file into a SemanticDocument using ScalaFix?
As of Scalafix 0.9.4:
To write a semantic rule, one has to extend the abstract class SemanticRule, and this abstract class has a method fix with the following signature:
def fix(implicit doc: SemanticDocument): Patch
If we override this method to create a Patch, either for fixing or for linting, we have access to the implicit value doc. Scalafix populates this value by parsing a single source file, so doc represents that source file.
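A minimal rule along these lines might look like the sketch below (assuming the scalafix v1 API and scalameta on the classpath; the rule name and the lint check are made up for illustration):
import scalafix.v1._
import scala.meta._

// Scalafix parses each source file into a SemanticDocument and passes it
// to fix(). Here we just report every call to println as a lint message.
class NoPrintln extends SemanticRule("NoPrintln") {
  override def fix(implicit doc: SemanticDocument): Patch =
    doc.tree.collect {
      case t @ Term.Apply(Term.Name("println"), _) =>
        Patch.lint(Diagnostic("noPrintln", "avoid println", t.pos))
    }.asPatch
}
Note that the semantic information comes from SemanticDB, so the code being analyzed has to be compiled with the semanticdb-scalac compiler plugin for Scalafix to build the SemanticDocument.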

Extracting the complete call graph of a scala project (tough one)

I would like to extract from a given Scala project, the call graph of all methods which are part of the project's own source.
As I understand it, the presentation compiler doesn't enable that, and it requires going all the way down to the actual compiler (or a compiler plugin?).
Can you suggest complete code that would safely work for most Scala projects, except those that use the wackiest dynamic language features? By call graph, I mean a directed (possibly cyclic) graph comprising class/trait + method vertices, where an edge A -> B indicates that A may call B.
Calls to/from libraries should be avoided or "marked" as being outside the project's own source.
EDIT:
See my macro paradise derived prototype solution, based on @dk14's lead, as an answer below. Hosted on GitHub at https://github.com/matanster/sbt-example-paradise.
Here's the working prototype, which prints the necessary underlying data to the console as a proof of concept. http://goo.gl/oeshdx.
How This Works
I have adapted the concepts from @dk14 on top of boilerplate from macro paradise.
Macro paradise lets you define an annotation that will apply your macro over any annotated object in your source code. From there you have access to the AST that the compiler generates for the source, and the Scala reflection API can be used to explore the type information of the AST elements. Quasiquotes (the etymology is from Haskell or thereabouts) are used to match the AST for the relevant elements.
More about Quasiquotes
The generally important thing to note is that quasiquotes work over an AST, but they are a strange-at-first-glance API and not a direct representation of the AST (!). The AST is picked up for you by paradise's macro annotation, and then quasiquotes are the tool for exploring the AST at hand: you match, slice and dice the abstract syntax tree using quasiquotes.
The practical thing to note about quasiquotes is that there are fixed quasiquote templates for matching each type of Scala AST - a template for a Scala class definition, a template for a Scala method definition, etc. These templates are all provided here, making it very simple to match and deconstruct the AST at hand into its interesting constituents. While the templates may look daunting at first glance, they are mostly just templates mimicking the Scala syntax, and you may freely change the $-prepended variable names within them to names that feel nicer to your taste.
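As a tiny illustration of how such a template is used (a sketch with made-up names, using the runtime reflection universe only so there is a tree to match against):
import scala.reflect.runtime.universe._

// Build a small AST with a quasiquote, then deconstruct it with the
// template for a method application, binding the function and arguments.
object QuasiquoteDemo extends App {
  val call = q"Util.getObjectName(tag)"

  call match {
    case q"$fun(..$args)" =>
      println(s"call to $fun with ${args.size} argument(s)")
    case _ =>
      println("not an application")
  }
}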
I still need to further hone the quasiquote matches I use, which currently aren't perfect. However, my code seems to produce the desired result for many cases, and honing the matches to 95% precision may be well doable.
Sample Output
found class B
class B has method doB
found object DefaultExpander
object DefaultExpander has method foo
object DefaultExpander has method apply
which calls Console on object scala of type package scala
which calls foo on object DefaultExpander.this of type object DefaultExpander
which calls <init> on object new A of type class A
which calls doA on object a of type class A
which calls <init> on object new B of type class B
which calls doB on object b of type class B
which calls mkString on object tags.map[String, Seq[String]](((tag: logTag) => "[".+(Util.getObjectName(tag)).+("]")))(collection.this.Seq.canBuildFrom[String]) of type trait Seq
which calls map on object tags of type trait Seq
which calls $plus on object "[".+(Util.getObjectName(tag)) of type class String
which calls $plus on object "[" of type class String
which calls getObjectName on object Util of type object Util
which calls canBuildFrom on object collection.this.Seq of type object Seq
which calls Seq on object collection.this of type package collection
.
.
.
It is easy to see how callers and callees can be correlated from this data, and how call targets outside the project's source can be filtered or marked out. This is all for scala 2.11. Using this code, one will need to prepend an annotation to each class/object/etc in each source file.
The challenges that remain are mostly:
This crashes after getting the job done. Hinging on https://github.com/scalamacros/paradise/issues/67
Need to find a way to ultimately apply the magic to entire source files without manually annotating each class and object with the static annotation. This is rather minor for now, and admittedly, there are benefits to being able to control which classes to include and ignore anyway. A preprocessing stage that implants the annotation before (almost) every top-level definition in a source file would be one nice solution.
Honing the matchers such that all and only relevant definitions are matched - to make this general and solid beyond my simplistic and cursory testing.
Alternative Approach to Ponder
acyclic brings to mind a quite opposite approach that still sticks to the realm of the Scala compiler: it inspects all symbols generated for the source by the compiler (as far as I gather from its source). What it does is check for cyclic references (see the repo for a detailed definition). Each symbol supposedly has enough information attached to it to derive the graph of references that acyclic needs to generate.
A solution inspired by this approach may, if feasible, locate the parent "owner" of every symbol rather than focus on the graph of source file connections as acyclic itself does. Thus, with some effort, it would recover the class/object ownership of each method. I am not sure whether this design would computationally explode, nor how to deterministically obtain the class encompassing each symbol.
The upside would be that there is no need for macro annotations here. The downside is that this cannot sprinkle runtime instrumentation as the macro paradise rather easily allows, which could be at times useful.
It requires more precise analysis, but as a start, this simple macro will print all possible applies; it requires macro paradise, and all traced classes should have the @trace annotation:
import scala.annotation.StaticAnnotation
import scala.language.experimental.macros
import scala.reflect.macros.whitebox.Context

class trace extends StaticAnnotation {
  def macroTransform(annottees: Any*): Any = macro tracerMacro.impl
}

object tracerMacro {
  def impl(c: Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
    import c.universe._
    val inputs = annottees.map(_.tree).toList

    def analizeBody(name: String, method: String, body: c.Tree) = body.foreach {
      case q"$expr(..$exprss)" => println(name + "." + method + ": " + expr)
      case _ =>
    }

    val output = inputs.head match {
      case q"class $name extends $parent { ..$body }" =>
        q"""
          class $name extends $parent {
            ..${
              body.map {
                case x @ q"def $method[..$tt] (..$params): $typ = $body" =>
                  analizeBody(name.toString, method.toString, body)
                  x
                case x @ q"def $method[..$tt]: $typ = $body" =>
                  analizeBody(name.toString, method.toString, body)
                  x
              }
            }
          }
        """
      case x => sys.error(x.toString)
    }
    c.Expr[Any](output)
  }
}
Input:
@trace class MyF {
  def call(param: Int): Int = {
    call2(param)
    if (true) call3(param) else cl()
  }
  def call2(param: Int) = ???
  def cl() = 5
  def call3(param2: Int) = ???
}
Output (as compiler warnings, but you may output to a file instead of println):
Warning:scalac: MyF.call: call2
Warning:scalac: MyF.call: call3
Warning:scalac: MyF.call: cl
Of course, you might want to c.typeCheck(input) it (as it stands, expr.tpe on the found trees is null) and find out which class the called method actually belongs to, so the resulting code may not be so trivial.
P.S. Macro annotations give you an untypechecked tree (as they run at an earlier compiler stage than regular macros), so if you want something typechecked, the best way is to surround the piece of code you want to typecheck with a call to some regular macro of yours and process it inside that macro (you can even pass some static parameters). Every regular macro inside a tree produced by a macro annotation will be executed as usual.
Edit
The basic idea in this answer was to bypass the (pretty complex) Scala compiler completely, and extract the graph from the generated .class files in the end. It appeared that a decompiler with sufficiently verbose output could reduce the problem to basic text manipulation. However, after a more detailed examination it turned out that this is not the case. One would just get back to square one, but with obfuscated Java code instead of the original Scala code. So this proposal does not really work, although there is some rationale behind working with the final .class files instead of intermediate structures used internally by the Scala compiler.
/Edit
I don't know whether there are tools out there that do it out of the box (I assume that you have checked that). I have only a very rough idea what the presentation compiler is. But if all that you want is to extract a graph with methods as nodes and potential calls of methods as edges, I have a proposal for a quick-and-dirty solution. This would work only if you want to use it for some sort of visualization, it doesn't help you at all if you want to perform some clever refactoring operations.
In case that you want to attempt building such a graph-generator yourself, it might turn out much simpler than you think. But for this, you need to go all the way down, even past the compiler. Just grab your compiled .class files, and use something like the CFR java decompiler on it.
When used on a single compiled .class file, CFR will generate a list of classes that the current class depends on (here I use my little pet project as an example):
import akka.actor.Actor;
import akka.actor.ActorContext;
import akka.actor.ActorLogging;
import akka.actor.ActorPath;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.ScalaActorRef;
import akka.actor.SupervisorStrategy;
import akka.actor.package;
import akka.event.LoggingAdapter;
import akka.pattern.PipeToSupport;
import akka.pattern.package;
import scala.Function1;
import scala.None;
import scala.Option;
import scala.PartialFunction;
...
(very long list with all the classes this one depends on)
...
import scavenger.backend.worker.WorkerCache$class;
import scavenger.backend.worker.WorkerScheduler;
import scavenger.backend.worker.WorkerScheduler$class;
import scavenger.categories.formalccc.Elem;
Then it will spit out some horrible-looking code that might look like this (small excerpt):
public PartialFunction<Object, BoxedUnit> handleLocalResponses() {
    return SimpleComputationExecutor.class.handleLocalResponses((SimpleComputationExecutor)this);
}

public Context provideComputationContext() {
    return ContextProvider.class.provideComputationContext((ContextProvider)this);
}

public ActorRef scavenger$backend$worker$MasterJoin$$_master() {
    return this.scavenger$backend$worker$MasterJoin$$_master;
}

@TraitSetter
public void scavenger$backend$worker$MasterJoin$$_master_$eq(ActorRef x$1) {
    this.scavenger$backend$worker$MasterJoin$$_master = x$1;
}

public ActorRef scavenger$backend$worker$MasterJoin$$_masterProxy() {
    return this.scavenger$backend$worker$MasterJoin$$_masterProxy;
}

@TraitSetter
public void scavenger$backend$worker$MasterJoin$$_masterProxy_$eq(ActorRef x$1) {
    this.scavenger$backend$worker$MasterJoin$$_masterProxy = x$1;
}

public ActorRef master() {
    return MasterJoin$class.master((MasterJoin)this);
}
What one should notice here is that all methods come with their full signature, including the class in which they are defined, for example:
Scheduler.class.schedule(...)
ContextProvider.class.provideComputationContext(...)
SimpleComputationExecutor.class.fulfillPromise(...)
SimpleComputationExecutor.class.computeHere(...)
SimpleComputationExecutor.class.handleLocalResponses(...)
So if you need a quick-and-dirty solution, it might well be that you could get away with just ~10 lines of awk, grep, sort and uniq wizardry to get nice adjacency lists with all your classes as nodes and methods as edges.
I've never tried it, it's just an idea. I cannot guarantee that Java decompilers work well on Scala code.

Doing something like Python's "import" in Scala

Is it possible to use Scala's import without specifying a main function in an object, and without using the package keyword in the source file with the code you wish to import?
Some explanation: In Python, I can define some functions in some file "Lib.py", write
from Lib import *
in some other file "Run.py" in the same directory, use the functions from Lib in Run, and then run Run with the command python Run.py. This workflow is ideal for small scripts that I might write in an hour.
In Scala, it appears that if I want to include functions from another file, I need to start wrapping things in superfluous objects. I would rather not do this.
Writing Python in Scala is unlikely to yield satisfactory results. Objects are not "superfluous" -- it's your program that is not written in an object oriented way.
First, methods must be inside objects. You can place them inside a package object, and they'll then be visible to anything else that is inside the package of the same name.
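For instance, a minimal sketch of the package object approach (the package and function names are made up for illustration):
// ----- File: "util/package.scala" -----
package object util {
  // Anything defined here is visible throughout the `util` package
  // and can be imported elsewhere with `import util._`.
  def greet(name: String): String = s"Hello, $name"
}

// ----- File: "util/Hello.scala" -----
package util

object Hello extends App {
  println(greet("world")) // no import needed inside the same package
}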
Second, if one considers solely objects and classes, then all package-less objects and classes whose class files are present in the classpath, or whose scala files are compiled together, will be visible to each other.
This is as minimal as I could get it:
[$]> cat foo.scala
object Foo {
  def foo(): Boolean = {
    return true
  }
}
// vim: set ts=4 sw=4 et:
[$]> cat bar.scala
object Bar extends App {
  import Foo._
  println(foo)
}
// vim: set ts=4 sw=4 et:
[$]> fsc foo.scala bar.scala
[$]> export CLASSPATH=.:$CLASSPATH # Or else it can't find Bar.
[$]> scala Bar
true
When you just write simple scripts, use Scala's REPL. There, you can define functions and call them without having any enclosing object or package, and without a main method.
Objects/classes don't have to be in packages, though it's highly recommended. That said, you can also treat singleton objects like packages, i.e., as namespaces for standalone functions, and import their contents as if they were packages.
If you define your application as an object that extends App, then you don't have to define a main method. Just write your code in the body of the object, and the App trait (which extends the special DelayedInit trait) will provide a main method that will execute your code.
If you just want to write a script, you can forgo the object altogether and just write code without any container, then pass your source file to the interpreter (REPL) in non-interactive mode.
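For example, such a script might look like this (a sketch; the file name and contents are arbitrary):
// ----- File: "hello.scala", run with: scala hello.scala -----
// Top-level statements, no object and no main method required:
// the script runner wraps and executes them directly.
def square(x: Int): Int = x * x

println(s"3 squared is ${square(3)}")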