What is the difference between Clojure REPL and Scala REPL?

I've been working with the Scala language for a few months and I've already created a couple of projects in it. I've found the Scala REPL (at least its IntelliJ worksheet implementation) quite convenient for quick development: I can write code, see what it does, and it's nice. But I can only do this for individual functions, not a whole program. I can't start my application and change it on the spot, or at least I don't know how (so if you do, I'd welcome some advice).
Several days ago my associate told me about the Clojure REPL. He uses Emacs for development and he can change code on the spot and see the results without restarting. For example, he starts the process, and if he changes the implementation of a function, his code changes its behavior without a restart. I would like to have the same thing with Scala.
P.S. I want to discuss neither which language is better nor whether functional programming is better than object-oriented programming. I want to find a good solution. If Clojure is the better language for the task, then so be it.

The short answer is that Clojure was designed to use a very simple, single-pass compiler which reads and compiles a single s-expression or form at a time. For better or worse there is no global type information, no global type inference and no global analysis or optimization. Clojure uses clojure.lang.Var instances to create global bindings through a series of hashmaps from textual symbols to transactional values. def forms all create bindings at global scope in this global binding map. So where in Scala a "function" (method) will be resolved to an instance or static method on a given JVM class, in Clojure a "function" (def) is really just a reference to an entry in the table of var bindings. When a function is invoked, there isn't a static link to another class; instead the var is referenced by symbolic name, then dereferenced to get an instance of a clojure.lang.IFn object which is then invoked.
This layer of indirection means that it is possible to re-evaluate only a single definition at a time, and that re-evaluation becomes globally visible to all clients of the re-defined var.
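To make that indirection concrete, here is a minimal Scala sketch (purely illustrative, not how Clojure is actually implemented on the JVM) that simulates a var table: callers look a function up by name on every call, so re-defining the entry changes behaviour for all existing call sites without recompiling them.

// Hypothetical sketch of Clojure-style var indirection, written in Scala.
// A "namespace" is just a mutable map from symbolic names to function values.
object Namespace {
  private val vars = scala.collection.concurrent.TrieMap.empty[String, Int => Int]

  def define(name: String)(f: Int => Int): Unit = vars(name) = f

  // Every call goes through the table: resolve the symbol, then invoke.
  def invoke(name: String, arg: Int): Int = vars(name)(arg)
}

object Demo extends App {
  Namespace.define("double")(x => x * 2)
  println(Namespace.invoke("double", 21)) // 42

  // "Re-evaluating the def": all future calls see the new behaviour immediately.
  Namespace.define("double")(x => x + x + 1)
  println(Namespace.invoke("double", 21)) // 43
}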
In comparison, when a definition in Scala changes, scalac must reload the changed file, macroexpand, type infer, type check, and compile. Then due to the semantics of classloading on the JVM, scalac must also reload all classes which depend on methods in the class which changed. Also all values which are instances of the changed class become trash.
Both approaches have their strengths and weaknesses. Obviously Clojure's approach is simpler to implement, however it pays an ongoing performance cost due to continual function lookup operations, to say nothing of correctness concerns due to the lack of static types and so on. This is arguably suitable for contexts in which lots of change is happening in a short timeframe (interactive development) but less suitable for contexts where the code is mostly static (deployment, hence Oxcart). Some work I did suggests that the slowdown of Clojure programs from the lack of static method linking is on the order of 16-25%. This is not to call Clojure slow or Scala fast; they just have different priorities.
Scala chooses to do more work up front so that the compiled application will perform better, which is arguably more suitable for application deployment where little or no reloading will take place, but proves a drag when you want to make lots of small changes.
Some material I have on hand about compiling Clojure code, in more or less chronological order by publication, since Nicholas influenced my GSoC work a lot:
Clojure Compilation [Nicholas]
Clojure Compilation: Full Disclojure [Nicholas]
Why is Clojure bootstrapping so slow? [Nicholas]
Oxcart and Clojure [me]
Of Oxen, Carts and Ordering [me]
Which I suppose leaves me in the unhappy place of saying simply "I'm sorry, Scala wasn't designed for that the way Clojure was" with regards to code hot swapping.

Related

Idiomatic Rust plugin system

I want to outsource some code for a plugin system. Inside my project, I have a trait called Provider which is the code for my plugin system. If you activate the feature "consumer" you can use plugins; if you don't, you are an author of plugins.
I want authors of plugins to get their code into my program by compiling it to a shared library. Is a shared library a good design decision? The plugins are limited to being written in Rust anyway.
Does the plugin host have to go the C way for loading the shared library: loading an unmangled function?
I just want authors to use the trait Provider for implementing their plugins and that's it.
After taking a look at sharedlib and libloading, it seems impossible to load plugins in an idiomatic Rust way.
I'd just like to load trait objects into my ProviderLoader:
// lib.rs
pub struct Sample { ... }

pub trait Provider {
    fn get_sample(&self) -> Sample;
}

pub struct ProviderLoader {
    plugins: Vec<Box<Provider>>
}
When the program is shipped, the file tree would look like:
.
├── fancy_program.exe
└── providers
├── fp_awesomedude.dll
└── fp_niceplugin.dll
Is that possible if plugins are compiled to shared libs? This would also affect the decision of the plugins' crate-type.
Do you have other ideas? Maybe I'm on the wrong path so that shared libs aren't the holy grail.
I first posted this on the Rust forum. A friend advised me to give it a try on Stack Overflow.
UPDATE 3/27/2018:
After using plugins this way for some time, I have to caution that in my experience things do get out of sync, and it can be very frustrating to debug (strange segfaults, weird OS errors). Even in cases where my team independently verified the dependencies were in sync, passing non-primitive structs between the dynamic library binaries tended to fail on OS X for some reason. I'd like to revisit this, find what cases it happens in, and perhaps open an issue with Rust, but I'm going to advise caution with this going forward.
LLDB and valgrind are near-essential to debug these issues.
Intro
I've been investigating things along these lines myself, and I've found there's little official documentation for this, so I decided to play around!
First let me note, as there is little official word on these properties please do not rely on any code here if you're trying to keep planes in the air or nuclear missiles from errantly launching, at least not without doing far more comprehensive testing than I've done. I'm not responsible if the code here deletes your OS and emails an erroneous tearful confession of committing the Zodiac killings to your local police; we're on the fringes of Rust here and things could change from one release or toolchain to another.
I have personally tested this on Rust 1.20 stable in both debug and release configurations on Windows 10 (stable-x86_64-pc-windows-msvc) and CentOS 7 (stable-x86_64-unknown-linux-gnu).
Approach
The approach I took was a common crate that both crates list as a dependency, defining the shared struct and trait definitions. At first, I was also going to test having a struct with the same structure, or a trait with the same definitions, defined independently in both libraries, but I opted against it because it's too fragile and you wouldn't want to do it in a real design. That said, if anybody wants to test this, feel free to do a PR on the repository above and I will update this answer.
In addition, the Rust plugin was declared dylib. I'm not sure how compiling as cdylib would interact, since I think it would mean that upon loading the plugin there are two versions of the Rust standard library hanging around (since I believe cdylib statically links the Rust stdlib into the shared object).
Tests
General Notes
The structs I tested were not declared #[repr(C)]. This could provide an extra layer of safety by guaranteeing a layout, but I was most curious about writing "pure" Rust plugins with as little "treating Rust like C" fiddling as possible. We already know you can use Rust via FFI by wrapping things in opaque pointers, manually dropping, and such, so it's not very enlightening to test this.
The function signature I used was pub fn foo(args) -> output with the #[no_mangle] attribute. It turns out that rustfmt automatically changes extern "Rust" fn to simply fn. I'm not sure I agree with this in this case, since they are most certainly "extern" functions here, but I will choose to abide by rustfmt.
Remember that even though this is Rust, this has elements of unsafety because libloading (or the unstable DynamicLib functionality) will not type check the symbols for you. At first I thought my Vec test was proving you couldn't pass Vecs between host and plugin, until I realized that on one end I had Vec<i32> and on the other I had Vec<usize>.
Interestingly, there were a few times I pointed an optimized test build to an unoptimized plugin and vice versa and it still worked. However, I still can't in good faith recommend building plugins and host applications with different toolchains, and even if you do, I can't promise that for some reason rustc/llvm won't decide to do certain optimizations on one version of a struct and not another. In addition, I'm not sure whether passing types through FFI prevents certain optimizations, such as null pointer optimizations, from occurring.
You're still limited to calling bare functions, no Foo::bar, since only free #[no_mangle] functions get predictable, loadable symbol names. In addition, because functions with trait bounds are monomorphized, generic functions and structs are also out. The compiler can't know you're going to call foo<i32>, so no foo<i32> is going to be generated. Any function crossing the plugin boundary must take only concrete types and return only concrete types.
Similarly, you have to be careful with lifetimes for similar reasons: since there's no static lifetime checking across the boundary, Rust is forced to believe you when you say a function returns &'a when it's really &'b.
Native Rust
The first tests I performed were on no custom structures; just pure, native Rust types. This would give a baseline for if this is even possible. I chose three baseline types: &mut i32, &mut Vec, and Option<i32> -> Option<i32>. These were all chosen for very specific reasons: the &mut i32 because it tests a reference, the &mut Vec because it tests growing the heap from memory allocated in the host application, and the Option as a dual purpose of testing passing by move and matching a simple enum.
All three work as expected. Mutating the reference mutates the value, pushing to a Vec works properly, and the Option works properly whether Some or None.
Shared Struct Definition
This was meant to test whether you could pass a non-builtin struct with a common definition on both sides between plugin and host. This works as expected, but as mentioned in the "General Notes" section, I can't promise Rust won't optimize a structure definition on one side and not the other. Always test your specific use case and use CI in case it changes.
Boxed Trait Object
This test uses a struct whose definition is only defined on the plugin side, but implements a trait defined in a common crate, and returns a Box<Trait>. This works as expected. Calling trait_obj.fun() works properly.
At first I actually anticipated there would be issues with dropping without making the trait explicitly have Drop as a bound, but it turns out Drop is properly called as well (this was verified by setting the value of a variable declared on the test stack via raw pointer from the struct's drop function). (Naturally I'm aware drop is always called even with trait objects in Rust, but I wasn't sure if dynamic libraries would complicate it).
NOTE:
I did not test what would happen if you load a plugin, create a trait object, then drop the plugin (which would likely close it). I can only assume this is potentially catastrophic. I recommend keeping the plugin open as long as the trait object persists.
Remarks
Plugins work exactly as you'd expect just linking a crate naturally, albeit with some restrictions and pitfalls. As long as you test, I think this is a very natural way to go. It makes symbol loading more bearable, for instance, if you only need to load a new function and then receive a trait object implementing an interface. It also avoids nasty C memory leaks because you couldn't or forgot to load a drop/free function. That said, be careful, and always test!
There is no official plugin system, and you cannot do plugins loaded at runtime in pure Rust. I saw some discussions about doing a native plugin system, but nothing is decided for now, and maybe there will never be any such thing. You can use one of these solutions:
You can extend your code with native dynamic libraries using FFI. To use the C ABI, you have to use repr(C), no_mangle attribute, extern etc. You will find more information by searching Rust FFI on the internets. With this solution, you must use raw pointers: they come with no safety guarantee (i.e. you must use unsafe code).
Of course, you can write your dynamic library in Rust, but to load it and call the functions, you must go through the C ABI. This means that the safety guarantees of Rust do not apply there. Furthermore, you cannot use Rust's higher-level functionality, such as traits and enums, between the library and the binary.
If you do not want this complexity, you can use a language designed to extend Rust, with which you can dynamically add functions to your code and execute them with the same guarantees as in Rust. This is, in my opinion, the easier way to go: if you have the choice, and if execution speed is not critical, use this to avoid tricky C/Rust interfaces.
Here is a (not exhaustive) list of languages that can easily extend Rust:
Gluon, a functional language like Haskell
Dyon, a small but powerful scripting language intended for video games
Lua with rlua or hlua
You can also use Python or Javascript, or see the list in awesome-rust.

What can I do to my scala code so it will compile faster?

I have a large scala code base. (https://opensource.ncsa.illinois.edu/confluence/display/DFDL/Daffodil%3A+Open+Source+DFDL)
It's like 70K lines of scala code. We are on scala 2.11.7
Development is getting difficult because the edit-compile-test-debug cycle is too long for small changes.
Incremental recompile times can be a minute, sometimes longer, and this is without optimization turned on. And that's after editing only a few files. Sometimes a very small change causes a huge recompilation.
So my question: What can I do by way of organizing the code, that will improve compilation time?
E.g., decomposing code into smaller files? Will this help?
E.g., more smaller libraries?
E.g., avoiding use of implicits? (we have very few)
E.g., avoiding use of traits? (we have tons)
E.g., avoiding lots of imports? (we have tons - package boundaries are pretty chaotic at this point)
Or is there really nothing much I can do about this?
I feel like this very long compilation is somehow due to some immense amount of recompiling due to dependencies, and I am thinking of how to reduce false dependencies....but that's just a theory
I'm hoping someone else can shed some light on something we might do which would improve compilation speed for incremental changes.
Here are the phases of the Scala compiler, along with slightly edited versions of their comments from the source code. Note that this compiler is unusual in being heavily weighted towards type checking and towards transformations that are more like desugarings. Other compilers include a lot of code for optimization, register allocation, and translation to an IR.

Some top-level points: There is a lot of tree rewriting. Each phase tends to read in a tree from the previous phase and transform it to a new tree. Symbols, in contrast, remain meaningful throughout the life of the compiler. So trees hold pointers to symbols, and not vice versa. Instead of rewriting symbols, new information gets attached to them as the phases progress.
Here is the list of phases from Global:

  analyzer.namerFactory: SubComponent,
  analyzer.typerFactory: SubComponent,
  superAccessors,           // add super accessors
  pickler,                  // serializes symbol tables
  refchecks,                // perform reference and override checking, translate nested objects
  liftcode,                 // generate reified trees
  uncurry,                  // uncurry, translate function values to anonymous classes
  tailCalls,                // replace tail calls by jumps
  explicitOuter,            // replace C.this by explicit outer pointers, eliminate pattern matching
  erasure,                  // erase generic types to Java 1.4 types, add interfaces for traits
  lambdaLift,               // move nested functions to top level
  constructors,             // move field definitions into constructors
  flatten,                  // get rid of inner classes
  mixer,                    // do mixin composition
  cleanup,                  // some platform-specific cleanups
  genicode,                 // generate portable intermediate code
  inliner,                  // optimization: do inlining
  inlineExceptionHandlers,  // optimization: inline exception handlers
  closureElimination,       // optimization: get rid of uncalled closures
  deadCode,                 // optimization: get rid of dead code
  if (forMSIL) genMSIL else genJVM, // generate .class files
Some ways to work around the Scala compiler's slowness:
Thus the Scala compiler has to do a lot more work than the Java compiler. In particular, there are some things which make the Scala compiler drastically slower, including:
Implicit resolution. Implicit resolution (i.e. scalac trying to find an implicit value when you use an implicit parameter or conversion) bubbles up through every enclosing scope of the call site, and this search time can be massive (particularly if you reference the same implicit many times, and it's declared in some library all the way down your dependency chain). Compile time gets even worse when you take into account implicit trait resolution and type classes, which are used heavily by libraries such as scalaz and shapeless.
Also, using a huge number of anonymous classes (i.e. lambdas, blocks, anonymous functions) slows things down, and macros obviously add to compile time.
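As a hedged illustration (the names here are made up, not from any particular library), a small type-class hierarchy shows how a single call site can trigger a whole chain of implicit searches, each of which scalac has to perform at compile time:

trait Show[A] { def show(a: A): String }

object Show {
  implicit val intShow: Show[Int] =
    new Show[Int] { def show(a: Int) = a.toString }

  implicit def listShow[A](implicit s: Show[A]): Show[List[A]] =
    new Show[List[A]] { def show(as: List[A]) = as.map(s.show).mkString("[", ", ", "]") }

  implicit def pairShow[A, B](implicit sa: Show[A], sb: Show[B]): Show[(A, B)] =
    new Show[(A, B)] { def show(p: (A, B)) = "(" + sa.show(p._1) + ", " + sb.show(p._2) + ")" }
}

def render[A](a: A)(implicit s: Show[A]): String = s.show(a)

// One innocent-looking call makes the compiler assemble
// listShow(pairShow(intShow, listShow(intShow))) by searching scopes and companions.
render(List((1, List(2, 3))))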
A very nice writeup by Martin Odersky
Further, the Java and Scala compilers convert source code into JVM bytecode and do very little optimization. On most modern JVMs, once the program bytecode is run, it is converted into machine code for the computer architecture on which it is being run. This is called just-in-time compilation. The level of code optimization is, however, low with just-in-time compilation, since it has to be fast. To avoid recompiling, the so-called HotSpot compiler only optimizes the parts of the code which are executed frequently.
A program might have different performance each time it is run. Executing the same piece of code (e.g. a method) multiple times in the same JVM instance might give very different performance results depending on whether the particular code was optimized in between the runs. Additionally, measuring the execution time of some piece of code may include the time during which the JIT compiler itself was performing the optimization, thus giving inconsistent results.
Another common cause of performance deterioration is the boxing and unboxing that happens implicitly when passing a primitive type as an argument to a generic method, and also frequent GC. There are several approaches to avoiding these effects during measurement: the code should be run using the server version of the HotSpot JVM, which does more aggressive optimizations, and VisualVM is a great choice for profiling a JVM application (it's a visual tool integrating several command-line JDK tools with lightweight profiling capabilities). However, Scala abstractions are very complex and unfortunately VisualVM does not yet support them well. Be careful, too, with collection methods that take predicates, such as exists and forall: they may traverse an entire sequence, so heavy use of them in hot paths (e.g. in parsing) can take a long time.
Also, making the modules cohesive and less interdependent is a viable solution. Mind that intermediate code generation is sometimes machine dependent, and various architectures give varied results.
An alternative: Typesafe has released Zinc, which separates the fast incremental compiler from sbt and lets Maven and other build tools use it. Using Zinc with the scala-maven-plugin has made compiling a lot faster.
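For instance, a minimal hand-rolled timing sketch (for anything serious a harness like JMH is a better idea) at least warms the JIT up before measuring, so that you are not timing interpretation and compilation itself:

def time[A](label: String)(body: => A): A = {
  val start  = System.nanoTime()
  val result = body
  println(label + ": " + (System.nanoTime() - start) / 1e6 + " ms")
  result
}

val xs = (1 to 1000000).toVector
def work(): Long = xs.filter(_ % 3 == 0).map(_.toLong).sum

for (_ <- 1 to 20) work()        // warm-up: let HotSpot profile and optimize work()
time("after warm-up")(work())    // this measurement is closer to steady-state performance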
A simple problem: Given a list of integers, remove the greatest one. Ordering is not necessary.
Below is one version of the solution (an average one, I guess).
def removeMaxCool(xs: List[Int]) = {
  val maxIndex = xs.indexOf(xs.max)
  xs.take(maxIndex) ::: xs.drop(maxIndex + 1)
}
It's Scala idiomatic, concise, and uses a few nice list functions. It's also very inefficient. It traverses the list at least 3 or 4 times.
Now consider this Java-like solution. It's what a reasonable Java developer (or Scala novice) would write.
import scala.collection.mutable.ArrayBuffer

def removeMaxFast(xs: List[Int]) = {
  val res = ArrayBuffer[Int]()
  var max = xs.head
  var first = true
  for (x <- xs) {
    if (first) {
      first = false
    } else {
      if (x > max) {
        res.append(max)
        max = x
      } else {
        res.append(x)
      }
    }
  }
  res.toList
}
Totally non-Scala idiomatic, non-functional, non-concise, but it's very efficient. It traverses the list only once!
So trade-offs should also be weighed, and sometimes you may have to write things the way a Java developer would if nothing else works.
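For what it's worth, there is also a middle ground (a sketch, using the same removeMax problem): a single pass with foldLeft that stays immutable and reasonably idiomatic while avoiding the repeated traversals of the first version.

def removeMaxFold(xs: List[Int]): List[Int] = xs match {
  case Nil => Nil
  case head :: tail =>
    // Carry (everything kept so far, current maximum); whenever a new maximum
    // appears, the old one is demoted into the kept list.
    val (kept, _) = tail.foldLeft((List.empty[Int], head)) {
      case ((acc, max), x) =>
        if (x > max) (max :: acc, x) else (x :: acc, max)
    }
    kept.reverse
}

Like the Java-style version it traverses the list only once; the result order differs from the input, which the problem statement explicitly allows.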
Some ideas that might help - depends on your case and style of development:
Use incremental compilation: ~compile in SBT, or as provided by your IDE.
Use sbt-revolver and maybe JRebel to reload your app faster. Better suited for web apps.
Use TDD - rather than running and debugging the whole app write tests and only run those.
Break your project down into libraries/JARs. Use them as dependencies via your build tool: SBT/Maven/etc. Or a variation of this next...
Break your project into subprojects (SBT). Compile separately what's needed, or the root project if you need everything. Incremental compilation is still available. (See the build.sbt sketch after this list.)
Break your project down to microservices.
Wait for Dotty to solve your problem to some degree.
If everything else fails, don't use advanced Scala features that make compilation slower: implicits, metaprogramming, etc.
Don't forget to check that you are allocating enough memory and CPU for your Scala compiler. I haven't tried it, but maybe you can use RAM disk instead of HDD for your sources and compile artifacts (easy on Linux).
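To make the subproject idea concrete, here is a hypothetical build.sbt sketch (module names are invented); an edit inside one module only forces recompilation of that module and of the modules that depend on it:

// build.sbt (sketch): three modules plus an aggregating root project
lazy val commonSettings = Seq(scalaVersion := "2.11.7")

lazy val core = (project in file("core"))
  .settings(commonSettings: _*)

lazy val parser = (project in file("parser"))
  .dependsOn(core)
  .settings(commonSettings: _*)

lazy val runtime = (project in file("runtime"))
  .dependsOn(core, parser)
  .settings(commonSettings: _*)

lazy val root = (project in file("."))
  .aggregate(core, parser, runtime)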
You are touching on one of the main problems of object-oriented design (over-engineering). In my opinion you have to flatten your class-object-trait hierarchy and reduce the dependencies between classes. Break packages into different jar files and use them as mini-libraries which are "frozen", and concentrate on new code.
Check some videos also from Brian Will, who makes a case against OO over-engineering
i.e https://www.youtube.com/watch?v=IRTfhkiAqPw (you can take the good points)
I don't agree with him 100% but it makes a good case against over-engineering.
Hope that helps.
You can try to use the Fast Scala Compiler.
Aside from minor code improvements (e.g. @tailrec annotations), depending on how brave you feel, you could also play around with Dotty, which boasts faster compile times among other things.
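For completeness, a small sketch of what such a @tailrec hint looks like; it costs nothing at runtime and makes the compiler reject the method if it cannot turn the recursion into a loop:

import scala.annotation.tailrec

object ListOps {
  @tailrec
  def sum(xs: List[Int], acc: Int = 0): Int = xs match {
    case Nil       => acc
    case x :: rest => sum(rest, acc + x)   // tail call, compiled to a loop
  }
}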

Why is Scala's Type system not a Library in Clojure

I've heard people claim that:
Scala's type system is amazing (existential types, variance, covariance)
Because of the power of macros, everything is a library in Clojure: (pattern matching, logic programming, non-determinism, ..)
Question:
If both assertions are true, why is Scala's type system not a library in Clojure? Is it because:
types are one of these things that do not work well as a library? [i.e. the changes would somehow have to threaded through every existing clojure library, including clojure.core?]
is Scala's notion of types fundamentally incompatible with clojure protocol / records?
... ?
It's an interesting question.
You are certainly right about Scala having an amazing type system, and about Clojure being phenomenal for meta-programming and extension of the language (although that is about more than just macros....).
A few reasons I can think of:
Clojure is a dynamically typed language while Scala is a statically typed language. Having powerful type inference isn't so much use in a language where you can assume relatively little about the types of your inputs.
Clojure already has a very interesting project to add typing as a library (Typed Clojure) which looks very promising - however it's very different in approach to Scala as it is designed for a dynamic language from the start (inspired more by Typed Racket, I believe).
Clojure philosophy actually discourages certain OOP concepts (particularly implementation inheritance, mutable objects, and data encapsulation). A type system that supports these things (as Scala does) wouldn't be a good fit for Clojure idioms - at best they would be ignored, but they could easily encourage a style of development that would cause people to run into severe problems later.
Clojure already provides tools that solve many of the problems you would typically solve with types in other languages - e.g. the use of protocols for polymorphism.
There's a strong focus in the Clojure community on simplicity (in the sense of the excellent video "Simple Made Easy" - see particularly the slide at 39:30). While Scala's type system is certainly amazing, I think it's a stretch to describe it as "Simple"
Putting in a Scala-style type system would probably require a complete rewrite of the Clojure compiler and make it substantially more complex. Nobody seems to have signed up so far to take on that particular challenge... and there's a risk that even if someone were willing and able to do this then the changes could be rejected for the various cultural / technical reasons covered above.
In the absence of a major change to Clojure itself (which I think would be unlikely) then one interesting possibility would be to create a DSL within Clojure that provided Scala-style type inference for a specific domain and compiled this DSL direct to optimised Java bytecode. I could see that being a useful approach for specific problem domains (large scale numerical data crunching with big matrices, for example).
To simply answer your question "... why is Scala's type system not a library in Clojure?":
Because the type system is part of the Scala compiler and not of the Scala library. The whole power of Scala's type system exists only at compile time. The JVM has no support for things like that, because of type erasure, and also because it would simply slow down execution. And also there is no need for it: if you have a statically typed language, you don't need type information at runtime, unless you want to do dirty stuff.
edit:
@mikera the JVM is surely capable of running the Scala compiler; I did not say anything like that. I just said that the JVM has no support for type systems like that. It does not even support generics. At runtime all these types are gone. The compiler checks the correctness of a program and removes all the higher-kinded types / generics.
example:
val xs: List[Int] = List(1,2,3,4)
val x1: Int = xs.head
will at runtime look like this:
val xs: List = List.apply(1,2,3,4)
val x1: Int = xs.head.asInstanceOf[Int]
But it doesn't matter, because the compiler checked it before. You can only get into trouble here when you use reflection, because then you could put any value in the list and it would break at runtime exactly where the value is cast to Int.
And this is one of the reasons why the Scala type system is not part of the Scala library, but built into the compiler.
And also, the question of the OP was "... why is Scala's type system not a library in Clojure?" and not "Is it possible to create a type system such as Scala's for Clojure?", and I answered exactly that question.

Elegant AST model

I am in the process of writing a toy compiler in scala. The target language itself looks like scala but is an open field for experiment.
After several large refactorings I can't find a good way to model my abstract syntax tree. I would like to use the facilities of Scala's pattern matching; the problem is that the tree carries changing information (like types and symbols) along the compilation process.
I can see a couple of solutions, none of which I like:
case classes with mutable fields (I believe the Scala compiler does this): the problem is that those fields are not present at each stage of the compilation and thus have to be nulled (or Option'd), and it becomes really heavy to debug and write code. Moreover, if, for example, I find a node with a null type after the typing phase, I have a really hard time finding the cause of the bug.
a huge trait/case class hierarchy: something like Node, NodeWithSymbol, NodeWithType, ... Seems like a pain to write AND to work with.
something completely hand-crafted with extractors
I'm also not sure if it is good practice to go with a fully immutable AST, especially in scala where there is no implicit sharing (because the compiler is not aware of immutability) and it could hurt performances to copy the tree all the time.
Can you think of an elegant pattern to model my tree using scala's powerful type system ?
TL;DR I prefer to keep the AST immutable and carry things like type information in a separate structure, e.g. a Map, that can be referred to by IDs stored in the AST. But there is no perfect answer.
You're by no means the first to struggle with this question. Let me list some options:
1) Mutable structures that get updated at each phase. All the up and downsides you mention.
2) Traits/cake pattern. Feasible, but expensive (there's no sharing) and kinda ugly.
3) A new tree type at each phase. In some ways this is the theoretically cleanest. Each phase can deal only with a structure produced for it by the previous phase. Plus the same approach carries all the way from front end to back end. For instance, you may "desugar" at some point and having a new tree type means that downstream phase(s) don't have to even consider the possibility of node types that are eliminated by desugaring. Also, low level optimizations usually need IRs that are significantly lower level than the original AST. But this is also a lot of code since almost everything has to be recreated at each step. This approach can also be slow since there can be almost no data sharing between phases.
4) Label every node in the AST with an ID and use that ID to reference information in other data structures (maps and vectors and such) that hold information computed for each phase. In many ways this is my favorite. It retains immutability, maximizes sharing and minimizes the "excess" code you have to write. But you still have to deal with the potential for "missing" information that can be tricky to debug. It's also not as fast as the mutable option, though faster than any option that requires producing a new tree at each phase.
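A minimal sketch of option 4 (all names here are illustrative, not taken from any real compiler): the tree stays immutable, and each phase publishes its results as a side table keyed by node IDs.

final case class NodeId(value: Int) extends AnyVal

sealed trait Expr { def id: NodeId }
final case class Lit(id: NodeId, value: Int)           extends Expr
final case class Ref(id: NodeId, name: String)         extends Expr
final case class Add(id: NodeId, lhs: Expr, rhs: Expr) extends Expr

sealed trait Type
case object IntType extends Type

// The typer never touches the tree; it just builds up a NodeId -> Type map.
def typer(e: Expr, acc: Map[NodeId, Type] = Map.empty): Map[NodeId, Type] = e match {
  case Lit(id, _)    => acc + (id -> IntType)
  case Ref(id, _)    => acc + (id -> IntType)   // toy language: every name is an Int
  case Add(id, l, r) => typer(r, typer(l, acc)) + (id -> IntType)
}

// Later phases look types up by ID, and a missing entry is an explicit Option,
// not a silent null.
def typeOf(types: Map[NodeId, Type], e: Expr): Option[Type] = types.get(e.id)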
I recently started writing a toy verifier for a small language, and I am using the Kiama library for the parser, resolver and type checker phases.
Kiama is a Scala library for language processing. It enables convenient analysis and transformation of structured data. The programming styles supported by the library are based on well-known formal language processing paradigms, including attribute grammars, tree rewriting, abstract state machines, and pretty printing.
I'll try to summarise my (fairly limited) experience:
[+] Kiama comes with several examples, and the main contributor usually responds quickly to questions asked on the mailing list
[+] The attribute grammar paradigm allows for a nice separation into "immutable components" of the nodes, e.g., names and subnodes, and "mutable components", e.g., type information
[+] The library comes with a versatile rewriting system which - so far - covered all my use cases
[+] The library, e.g., the pretty printer, make nice examples of DSLs and of various functional patterns/approaches/ideas
[-] The learning curve is definitely steep, even with examples and the mailing list at hand
[-] Implementing the resolving phase in a "purely functional" style (cf. my question) seems tricky, but a hybrid approach (which I haven't tried yet) seems to be possible
[-] The attribute grammar paradigm and the resulting separation of concerns doesn't make it obvious how to document the properties nodes have in the end (cf. my question)
[-] Rumour has it, that the attribute grammar paradigm does not yield the fastest implementations
Summarising my summary, I enjoy using Kiama a lot and I strongly recommend that you give it a try, or at least have a look at the examples.
(PS. I am not affiliated with Kiama)

Scala: Lazy baking and runtime compilation of cake pattern

One of the great limitations of the cake pattern is that it's static. I would like to be able to mix in traits potentially written by different coders completely independently. However, the traits would not need to be mixed in frequently. The user would have an initialisation screen where they would choose the traits / assemblies before the main application was run. So the thought occurred to me: why not mix in and compile the chosen traits from within the user choice selection module? If the compilation failed, no problem, the user would just get back some message - incompatible assemblies or whatever. If the compilation succeeded then the top UI module would load the newly compiled classes along with the pre-compiled parts of the assemblies and run the main application. Note there might only need to be one or two classes compiled during run-time initialisation. All the rest of the code could have been compiled normally.
I'm pretty new to Scala. Is this a recognised pattern? Is there any support for it? It seems mad to have to use Guice for a relative simple dependency situation. Can I run the Scala compiler easily from within an application? Can I run it in memory and its outputs be used from memory without unnecessary file creation?
Note: Although appearing to be dynamic, this methodology would remain 100% static.
Edit: it occurs to me that one of the drivers of Microsoft's Roslyn project was to enable just this sort of thing for C# and Visual Basic. But that seems to have been a pretty big project, even for a high-powered Microsoft team.
Calling the compiler directly from within Scala is doable, but not for the timid. Luckily, the good people at Twitter have automated the process for you. (140-character celebrity micro-blogging, and some cool Scala utilities! Thanks, Twitter.) You can use the com.twitter.util.Eval class to compile and evaluate Scala strings. In your example, you would do something like
val eval = new Eval()
val myObj = eval[BaseClass]("new BaseClass with " + traitNameList.mkString(" with "))
This will create you a new object with all of the traits you desire built in. The question then arises as to whether this is a good idea. Downsides:
Calling out to the Scala compiler is not quick
If you do this enough, you will overload the PermGen space, as the classes you create will never be garbage collected
This really is more the sort of thing you want a dynamic language for, rather than Scala. You're likely to find places where this sort of works but clashes with the rest of your architecture (yes, that's vague).
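If you would rather avoid the extra dependency, a similar in-memory approach is possible with the compiler's own ToolBox API (a sketch; it needs scala-compiler and scala-reflect on the runtime classpath, and has the same speed and PermGen caveats as Eval):

import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

val toolBox = currentMirror.mkToolBox()

// Parse and evaluate a snippet entirely in memory; no .class files are written to disk.
val tree = toolBox.parse("""new AnyRef { override def toString = "assembled at runtime" }""")
val obj  = toolBox.eval(tree)
println(obj)   // prints: assembled at runtime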