Code generation using Treehugger (Scala)

I am using Treehugger to generate code at runtime, but I could not find much documentation for it. My question is: if I generate classes using Treehugger, will I be able to access those classes later?
To be precise: I want to read data coming from files such as CSV and create classes at runtime. Can I then use such a class later, say in the next class generated at runtime?
I am really new to Scala, so please forgive me if my explanation is not clear.
Thanks a lot!

I've done something similar, so I'll share what I've learned:
Treehugger ultimately generates code (strings) at runtime to be used in a subsequent, separate run (or I suppose to be eval'd at runtime, but I never got that to work).
So the course of action depends on what you mean by "runtime":
Are your .csv files only available at runtime? If you have access to the files at compile time (as is often the case), then there are examples of both of your options: experimental (Scala macros) or traditional (an sbt plugin that generates sources before compilation). Both approaches are similar but have subtle pros and cons.
If you only have access to the files at runtime, but still need to generate and "type" the classes and make the compiler expect them, then it seems to me that somebody has made a bad design decision! But if you find yourself stuck in this situation, it is possible to define and load classes at runtime with a bytecode-engineering library and some type-checker black magic (a runtime type provider).
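For the first point, here is a hedged sketch of what "Treehugger generates strings" looks like in practice (the Record class and its fields are invented for illustration, as if they came from a CSV header):

import treehugger.forest._
import definitions._
import treehuggerDSL._

// Build an AST for a case class; names/types would come from the CSV header
val recordClass = (CASECLASSDEF("Record") withParams(
  PARAM("name", StringClass),
  PARAM("age", IntClass)
)).tree

// Treehugger only pretty-prints the AST; compiling/loading the result in a
// later run (or feeding it to a compiler API) is a separate step.
println(treeToString(recordClass)) // case class Record(name: String, age: Int)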

Related

Compile only one class in a Scala program that contains more than one class?

I know that a Scala file, as in Java, should ideally define only one class. But now I have someone else's code that defines two classes. I'd like to compile only one of them, because the other one does not compile. Of course I could comment out the other one, but I have a large number of such files and I am looking for an automated solution. Any ideas?
The Scala compiler works on whole files. The only way I can imagine doing this is to write a script that comments out the classes you don't want, runs scalac, and then removes the comment markers again (you could also use the Scala compiler as a library to get equivalent results without literally editing files). Needless to say, I don't think it's actually a good idea, but it's possible, as sketched below.
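For what it's worth, here is a rough sketch of such a script in Scala (the file name Both.scala and class name Broken are invented, and the naive regex only handles the case where the unwanted class is the last thing in the file):

import java.nio.file.{Files, Paths, StandardCopyOption}
import scala.sys.process._

object CompileOneClass extends App {
  val src    = Paths.get("Both.scala")   // hypothetical file with two classes
  val backup = Paths.get("Both.scala.bak")
  Files.copy(src, backup, StandardCopyOption.REPLACE_EXISTING)

  // Comment out everything from `class Broken` to the end of the file.
  // Scala block comments nest, so commented-out comments stay legal.
  val text = new String(Files.readAllBytes(src), "UTF-8")
  Files.write(src, text.replaceAll("(?s)(class Broken.*)", "/* $1 */").getBytes("UTF-8"))

  val exit = Seq("scalac", src.toString).!  // compile the remaining class
  Files.copy(backup, src, StandardCopyOption.REPLACE_EXISTING)  // restore
  Files.delete(backup)
  sys.exit(exit)
}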

Scala Metaprogramming at Runtime

I'm building a tool that will receive unpredictable data structures, and I want to generate case classes to match the structure of the received data.
I'm trying to figure out whether it's possible to generate a case class at runtime. The structure will be known only at runtime.
It's something similar to what a macro does, but at runtime.
I've found this project on the internet:
mars
which is very close to what I want to do, but I couldn't find out whether it was successful or not.
Another way of doing it is to generate the code, compile it and put the result on the classpath, as IScala does in order to use the code interactively. But I don't think that will scale.
Has anybody already done something like runtime code generation?
This question was also posted on the scala-user mailing list.
UPDATE: (as per the comments)
If all you want is throw-away code generated at runtime to be fed into a library that cannot work with plain lists and maps (rather than code to be stored and used later), it would make sense to look for solutions to this problem for Java or the JVM in general. That is, unless the library requires some Scala-specific features not available in vanilla JVM bytecode (Scala adds some extras to the bytecode that Java code doesn't need/have).
What is the benefit of generating statically typed code dynamically, as opposed to using a dynamic data structure?
I would not attempt that at all. Just use a structure such as nested lists and maps, as sketched below.
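To illustrate that comment, a tiny sketch of the dynamic-structure alternative (the column names are invented):

// Rows from an unknown CSV schema as maps keyed by column name
val header = List("name", "age")
val row    = List("Ada", "36")
val record: Map[String, String] = header.zip(row).toMap

record.get("age").map(_.toInt) // Some(36): fields are looked up by name, not statically typed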
Runtime code generation is one of the purposes of the Mars project. Mars is under development; at the moment there is no released version. Mars requires its own toolchain to expand macros at runtime and uses several features unique to scala.meta (http://scalameta.org/), for example AST interpretation and AST persistence. Currently we are working on AST typechecking in scala-reflect, which is required for runtime macro expansion.
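For comparison, stock scala-reflect (2.10+) can already compile a case class from a string at runtime through a ToolBox, provided scala-compiler is on the classpath; a minimal sketch:

import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

val tb = currentMirror.mkToolBox()
// The class is compiled, loaded and instantiated in one go. Note that code
// compiled earlier in your program cannot refer to Person statically.
val person = tb.eval(tb.parse("""
  case class Person(name: String, age: Int)
  Person("Ada", 36)
"""))
println(person) // Person(Ada,36)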

Scala: Lazy baking and runtime compilation of cake pattern

One of the great limitations of the cake pattern is that it's static. I would like to be able to mix in traits potentially written by different coders completely independently. However, the traits would not need to be mixed in frequently. The user would have an initialisation screen where they would choose the traits / assemblies before the main application was run. So the thought occurred to me: why not mix in and compile the chosen traits from within the user's selection module? If the compilation failed, no problem: the user would just get back a message (incompatible assemblies or whatever). If the compilation succeeded, the top UI module would load the newly compiled classes together with the pre-compiled parts of the assemblies and run the main application. Note that there might only need to be one or two classes compiled during runtime initialisation; all the rest of the code could have been compiled normally.
I'm pretty new to Scala. Is this a recognised pattern? Is there any support for it? It seems mad to have to use Guice for a relatively simple dependency situation. Can I run the Scala compiler easily from within an application? Can I run it in memory and use its output from memory without unnecessary file creation?
Note: Although appearing to be dynamic, this methodology would remain 100% static.
Edit: it occurs to me that one of the drivers behind Microsoft's Roslyn project was to enable just this sort of thing for C# and Visual Basic. But that seems to have been a pretty big project, even for a high-powered Microsoft team.
Calling the compiler directly from within Scala is doable, but not for the timid. Luckily, the good people at Twitter have automated the process for you. (140-character celebrity micro-blogging, and some cool Scala utilities! Thanks, Twitter.) You can use the com.twitter.util.Eval class to compile and evaluate Scala strings. In your example, you would do something like
import com.twitter.util.Eval // from Twitter's util-eval module

val eval = new Eval()
val myObj = eval[BaseClass]("new BaseClass with " + traitNameList.mkString(" with "))
This will create a new object with all of the traits you want mixed in. The question then arises as to whether this is a good idea. Downsides:
Calling out to the Scala compiler is not quick
If you do this enough, you will overload the PermGen space, as the classes you create will never be garbage collected
This really is more the sort of thing you want a dynamic language for, rather than Scala. You're likely to find places where this sort of works but clashes with the rest of your architecture (yes, that's vague).

Environment properties files in a Scala project

I'm just starting to learn Scala for a new project. I have got to the point where I would like to define different properties files for the different environments the app is going to run in, ideally in a similar way to Rails: very lightweight, just one properties file per environment, loaded based on its name. I don't really care whether it's a Java properties file, YAML or Scala code.
In the spirit of not reinventing the wheel, I've been looking to see if there is some accepted standard Scala way of doing this, but I can't find one. I've found a few similar but not identical questions here where people suggest setting system properties in the startup script, but this feels like it would end up being a nightmare.
I could obviously implement it if needs be but feels like the sort of thing that should already exist. So - does it?
I'm using sbt if that makes a difference.
I know of Configgy. Also, Akka/Play 2.0 will be using Config, which looks nice too. See the blog post about the latter.
Basically, Configgy has been used for a while now but has been deprecated, while Config is all new. However, having Config as the default Typesafe Stack configuration tool will probably make it the preferred tool pretty quickly.
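For the Config route, per-environment files boil down to something like this (the key name db.url and the file production.conf are invented):

import com.typesafe.config.ConfigFactory

// Loads application.conf from the classpath by default; start the app with
// -Dconfig.resource=production.conf to swap in another environment's file.
val conf  = ConfigFactory.load()
val dbUrl = conf.getString("db.url")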
I have written a Configgy replacement called Configrity. It can use different input formats (such as YAML), it's immutable, it supports functional patterns, and it uses type classes to automatically convert values to the desired type.
I have written BeeConfig, a replacement for java.util.Properties, except that it has a Scala API and uses UTF-8 encoded configuration files. It supports string interpolation, chaining and a bunch of other features, but its main objective is simplicity.
Bitbucket | Blog post
Rick

How do I generate new source code in text form in a Scala compiler plugin?

I have just finished the first version of a Java 6 compiler plugin that automatically generates wrappers (proxy, adapter, delegate, call it what you like) based on an annotation.
Since I am working on mixed Java/Scala projects, I would like to be able to use the same annotation inside my Scala code and get the same generated code (except, of course, in Scala). That basically means starting from scratch.
What I would like to do, and for which I haven't found an example yet, is to generate the code inside a Scala compiler plugin in the same way as in the Java compiler plugin. That is, I find where my annotation is used, get the AST for the annotated interface, and then ask the API to give me a Stream/Writer into which I output the generated Scala source code using string manipulation.
That last part is what I could not find. So how do I tell the API to create a new Scala source file and give me a Stream/Writer/File/Handle so I can just write to it, and, when I'm done, have the Scala compiler compile it within the same run in which the plugin was invoked?
Why would I want to do that? Firstly, because then both plugins have the same structure, so maintenance is easy. Secondly, I want to open-source it, and there is just no way to support every option that anyone would want, so I expect potential users to extend the generation with their own code. This will be a lot easier for them if they just have to do some printf() instead of learning the AST API (this also applies to me).
Short answer:
It can't be done.
Long answer:
You could conceivably generate your source file and push that through a parser instance within your plugin. But not in any way that's likely to be of any use to you, because you'd now have a bigger problem to contend with:
In order to grab all the type/name information for generating the delegate/proxy, you'll have to pick up the annotated type's AST after it has run through both the namer and typer phases (which are inseparable). The catch is that any attempts to call your generated code will already have failed typechecking, the compiler will have thrown an error, and any further bets are off.
Method synthesis is possible in limited cases, so long as you can somehow fool the typechecker for just long enough to get your code generated, which is the trick I pulled with my Autoproxy 'lite' plugin. Even then, you're far better off working with TreeDSL to generate code instead of pumping out raw source.
Kevin is entirely correct, but just for completeness it's worth mentioning that there is another alternative: write a compiler plugin that generates source. This is the approach I've adopted in Borachio. It's not a very satisfactory solution, but it can be made to work.
Edit: I just reread your question and realised that you're actually asking about generating source anyway.
So there is no direct support for this, but it's basically just a question of opening a file and writing the relevant "print" statements; a minimal sketch follows. There's no way to invoke the compiler "inside" a plugin AFAIK, but I've written an sbt plugin which hides most of the complexity of invoking the compiler twice.
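To make the "open a file and print" route concrete, here is a hedged skeleton of a Scala 2 compiler-plugin phase that just writes out text (the plugin and phase names are invented; as noted above, the emitted file is not compiled in the same run):

import java.io.PrintWriter
import scala.tools.nsc.{Global, Phase}
import scala.tools.nsc.plugins.{Plugin, PluginComponent}

class SourceGenPlugin(val global: Global) extends Plugin {
  import global._
  val name = "srcgen"
  val description = "writes generated Scala source as plain text"
  val components = List[PluginComponent](Component)

  private object Component extends PluginComponent {
    val global: SourceGenPlugin.this.global.type = SourceGenPlugin.this.global
    val runsAfter = List("typer") // names and types must be resolved first
    val phaseName = "srcgen"
    def newPhase(prev: Phase): Phase = new StdPhase(prev) {
      def apply(unit: CompilationUnit): Unit = {
        // One generated file per compilation unit, written as plain text
        val out = new PrintWriter(s"${unit.source.file.name}.generated.scala")
        try out.println("// printf-style generated code goes here")
        finally out.close()
      }
    }
  }
}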