Is Scala incompatible with itself? [duplicate]

Why is Scala binary incompatible between different releases?

It has to do with the way traits are compiled, because traits are kind of like interfaces, but they can contain implementation. This makes it VERY easy to make changes that don't break source compatibility but do break binary compatibility: when you add a new method to a trait along with an implementation, everything that mixes in that trait has to be recompiled so that it picks up that implementation. There are probably other issues too, but I think they're mostly along the same lines.
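To make the trait point concrete, here is a minimal sketch (the library and class names are made up for illustration); it assumes the classic Scala 2 trait encoding, where the compiler generates forwarder methods in every class that mixes the trait in:

```scala
// --- library, version 1.0 ---
trait Greeter {
  def hello(name: String): String
}

// --- client code, compiled once against version 1.0 ---
class ConsoleGreeter extends Greeter {
  def hello(name: String): String = s"Hello, $name"
}

// --- library, version 1.1 (hypothetical next release) ---
// Adding a concrete method keeps the client *source*-compatible:
//
//   trait Greeter {
//     def hello(name: String): String
//     def goodbye(name: String): String = s"Goodbye, $name"
//   }
//
// but with the classic trait encoding the compiler must emit a forwarder
// for `goodbye` inside every class that mixes in Greeter. The already
// compiled ConsoleGreeter.class contains no such forwarder, so linking it
// against the 1.1 jar can fail at runtime unless the client is recompiled.
```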

Another factor is the lack of direct JVM support for Scala-specific features, such as the traits mentioned above, combined with the fact that the language is still actively evolving.

Here's background on this, straight from Odersky, if you want to understand the specific language issues that cause problems:
http://www.scala-lang.org/node/9346
It's worth reading in conjunction with this post from David Pollack if you are new to the issue and want to understand the impact this can have on applications:
http://lift.la/scalas-version-fragility-make-the-enterprise

I've implemented Scala support in japi-compliance-checker 1.6 and analyzed backward compatibility (both binary and source) for all versions of Scala.
So now you can view the breaking changes in detail. The report is available here: http://abi-laboratory.pro/java/tracker/timeline/scala/
The report is updated every other day, so you can monitor changes in the most recent versions of Scala.

It's still relatively young and undergoing active development.
There are some changes in the new release that were anxiously awaited and that help with a lot of problems, but it wasn't possible to make them backward compatible.
Because Sun is kind of restrictive about updates, Java changes rather slowly, and usually tries to remain backward compatible to the bitter end. Sometimes this stands in the way of progress, but big companies love a stable language.
Scala, on the other hand, is in the hands of a small group of academics, and it's not (yet) widely used in the industry, so they have (or take) some more freedom with changes.

Umm, no. Get your facts straight.
There was no recompilation needed* when going from 2.7.2.3b1 -> 2.7.2.3b2, which was a real relief for me because of the large customer base we had with entrenched legacy code using 2.7.2.3b1 features.
*Caveat - unless you foolishly used code in scala.collection._ or scala.xml._

Related

How seamless will dotty/scala3 integration be with tech like scala-native and scala-js?

Are there any limitations we should be aware of? Will it require us to use Scalafix-like tools, or will it work out of the box?
Migration from 2.13 to 3.0 in general:
Dotty uses the 2.13 collections, so nothing needs to change here - as a matter of fact, 2.13 is so close to 3.0 that the maintainers decided to skip the 2.14 release, which had been planned as a stepping stone
macros will need to be rewritten - that is the biggest issue, but library maintainers have some time to do it, and some are rewriting things even now (see Quill)
there will be a few deprecations, e.g. the forSome syntax for existential types disappears (see: Dropped Features in the documentation)
libraries might need to extend themselves to support the new features (union/intersection/opaque types), but until you start using the new things in your own code everything works as before (a short sketch of these follows this list)
other than that, old Scala code should work without any changes
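For reference, a minimal Scala 3 sketch of the new type features mentioned above (all names are illustrative):

```scala
// Union type: a value that is either an Int or a String.
def describe(x: Int | String): String = x match
  case i: Int    => s"number $i"
  case s: String => s"text $s"

// Intersection type: the argument must satisfy both traits.
trait Loggable  { def log(msg: String): Unit }
trait Closeable { def close(): Unit }

def shutdown(res: Loggable & Closeable): Unit =
  res.log("shutting down")
  res.close()

// Opaque type: a zero-overhead wrapper whose representation is hidden
// outside the defining scope.
object Ids:
  opaque type UserId = Long
  object UserId:
    def apply(raw: Long): UserId = raw
  extension (id: UserId) def raw: Long = id
```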
Scalafix is already being used in production, e.g. Scala Steward is able to apply migrations as it updates libraries to new versions.
Scala.js is already supported as a Dotty backend next to the JVM one.
The Scala Center recently took over Scala Native, so we should expect Scala Native development to speed up (it had stalled a bit), and it should eventually land as another supported backend. I cannot tell whether they will manage to deliver before the release of Dotty, but I doubt it. For now, Scala Native would have to get support for 2.12 and/or 2.13 first. Track this issue if you want to know, or ask on Gitter.
Long story short: you would need to wait for the libraries you use to get ported to Dotty, then update your macros if you wrote any; besides that, migration should be pretty much straightforward for the JVM and JS backends. Scala Native will probably take more time.

What is the most mature library for building a Data Analytics Pipeline in Java/Scala for Hadoop?

I've found many options recently, and I'm interested in comparing them, primarily by maturity and stability.
Crunch - https://github.com/cloudera/crunch
Scrunch - https://github.com/cloudera/crunch/tree/master/scrunch
Cascading - http://www.cascading.org/
Scalding - https://github.com/twitter/scalding
FlumeJava
Scoobi - https://github.com/NICTA/scoobi/
As I'm a developer of Scoobi, don't expect an unbiased answer.
First of all, FlumeJava is an internal Google project that provides an (awesomely productive) abstraction on top of MapReduce (not Hadoop though). They released a paper about it, which is what projects like Scoobi and Crunch are based on.
If your only criterion is maturity -- I guess Cascading is your best bet.
However, if you're looking for the (imho superior) FlumeJava-style abstraction, you'll want to pick between (S)crunch and Scoobi.
The biggest difference, superficial as it may be, is that Crunch is written in Java with Scala bindings (Scrunch), while Scoobi is written in Scala with Java bindings (scoobij). They're both really solid choices, and you won't go wrong whichever you choose. I'm sure there's quite a similar story with Crunch, but Scoobi is being used in real projects and is under continual development. We're very active in fixing bugs and implementing features.
Anyway, they're both great projects with great people behind them, and they were both released within days of each other. They provide the same abstraction (with a similar API), so switching between the two won't be an issue in the slightest. My recommendation is to give them both a try and see what works for you. There's no lock-in in either project, so you don't need to commit :)
And if you have any feedback for either project, please be sure to provide it :)
I'm a big Scoobi fan myself and I've used it in production. I like the way it allows you to write type-safe Hadoop programs in a very idiomatic Scala way. If that is not necessarily your thing and you like the Cascading model but are scared off by the huge amount of boilerplate code you'd have to write, Twitter has recently open sourced its own Scala abstraction layer on top of Cascading called Scalding.
Announcement: https://dev.twitter.com/blog/scalding
GitHub: https://github.com/twitter/scalding
I guess it's all a matter of taste at this point since feature-wise most of the frameworks are very close to one another.
Scalding also has the advantage of significant open source projects built atop it, such as Matrix API and Algebird.
Here are some examples:
http://sujitpal.blogspot.com/2012/08/scalding-for-impatient.html
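To give a sense of the style, here is roughly the canonical word-count job in Scalding's fields-based API (the input/output paths come from command-line args; treat this as a hedged sketch rather than a guaranteed-current API):

```scala
import com.twitter.scalding._

// Classic word count: read lines, split into words, count per word, write TSV.
class WordCountJob(args: Args) extends Job(args) {
  TextLine(args("input"))
    .flatMap('line -> 'word) { line: String => line.split("""\s+""") }
    .groupBy('word) { _.size }
    .write(Tsv(args("output")))
}
```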
Cascalog was released almost two years before Scalding, and arguably has more advanced features for building robust workflows:
https://github.com/nathanmarz/cascalog/wiki

Are there any disadvantages of using C# 3.0 features?

I like C# 3.0 features, especially lambda expressions, auto-implemented properties and, in suitable cases, implicitly typed local variables (the var keyword), but when my boss found out that I was using them, he asked me not to use any C# 3.0 features at work. I was told that these features are not standard, are confusing for most developers, and that their usefulness is doubtful. I was restricted to using only C# 2.0 features, and he is also considering forbidding anonymous methods.
Since we are targeting .NET Framework 3.5, I cannot see any reason for these restrictions. In my opinion, perhaps the only disadvantage is that my few co-workers and the boss (also a programmer) would have to learn some basics of C# 3.0, which should not be difficult. What do you think about it? Is my boss right, and am I missing something? Are there any good reasons for such a restriction in a development company where C# is the main programming language?
I have had a similar experience (I was asked not to use generics, because they might be confusing to my colleagues).
The fact is that we now use generics and none of my colleagues have a problem with them. They may not have grasped how to create generic classes, but they certainly understand how to use them.
My opinion on that is that any developer can learn how to use these language features. They may seem advanced at first but as people get used to them the shock of newness lessens.
The main argument for using these features (or any new language features) is that this is a simple and easy way to help my colleagues advance their skills, rather than stagnating.
As for your particular problem - not using lambdas: lots of the updates to the BCL have overloads that take delegates as parameters, and these are in many cases most easily expressed as lambdas; not using them this way means ignoring some of the new and updated parts of the BCL.
In regard to your peers not being able to learn lambdas - I found that Jon Skeet's C# in Depth deals with how they evolved from delegates in a manner that is easy to follow and a real eye-opener. I would recommend you get a copy for your boss and colleagues.
Your boss is going to need to understand that language (and other) improvements are designed to give developers more capabilities and make them more efficient in completing the task at hand, and that if he is not going to allow them for unknown reasons then:
The development team isn't producing at its greatest potential.
The company isn't benefiting from increased efficiency/productivity.
Like others have said, developers aren't worth their salt if they can't keep up with some of the latest improvements in the language that they use on a daily basis. I suspect your boss hasn't done much coding lately, and it is his inability to understand the latest language improvements that has motivated this decision.
I was told that these features are not standard and confusing for most developers and its usefulness is doubtful. I was restricted to use only C# 2.0 features and he is also considering forbidding anonymous methods.
This presumably translates, roughly, to your boss meaning...
These features are confusing for me, and I don't find them useful because I don't understand them.
Which is fairly symptomatic of the Blub paradox (well, or just sheer laziness). Either way there's no merit in what he's saying, and you should start looking for another job if he continues down that road.
If the project is strictly C# 3+ from now on, then you would not break the build by including these items. However, before using them you should be aware of the following:
You can't use them if the project lead gets to make the decision and votes no.
Other than that, you should use them where it makes the code significantly easier to maintain.
You should not use them in ways that are confusing, or unnecessary in the sense that they do not significantly improve the maintainability of the code. This does mean you should not use them where the code is effectively the same or barely improved.
If Microsoft didn't define the standard and these were features that they added to a non-Microsoft language, I would say your boss might have a point. However, since Microsoft defines the language and uses these very features in implementing significant parts of .NET 3.5 (and 4.0), I'd say that you'd be foolish to ignore them. You may not choose to use some of them -- var, for instance, may not be acceptable in all environments due to coding standards -- but a blanket policy of avoiding new features seems unreasonable.
The trickier bit is when should you start using new features, because they can be confusing and may delay development. In general, I choose to use new language features and platform elements on new projects. I often avoid using them on projects that are currently in development when the feature/framework enhancement comes out, deferring until the next project. On a long project, I might introduce them at a significant milestone if the amount of rearchitecting is small or the feature is worth the changes. Normally, I'd wait until the project is due for significant changes anyway and then evaluate if refactoring to newer features is warranted.
The jury is still out on the long-term consequences of some features, but if the main rationale is 'it is confusing to other developers' or something similar, then I would be concerned about the quality of the talent.
I like C# 3.0 features especially lambda expressions, auto implemented properties or in suitable cases also implicitly typed local variables (var keyword), but when my boss revealed that I am using them, he asked me not to use any C# 3.0 features in work. I was told that these features are not standard and confusing for most developers and its usefulness is doubtful.
He's got a point.
Following that line of thought, let's make a rule against generic collections since List<T> doesn't make any sense (angle brackets? wtf?).
While we're at it, let's eliminate all interfaces (when are you ever gonna need a class without any implementation?).
Hell, let's go ahead and eliminate inheritance since it's so tricky these days (is-a? has-a? can't we all just be friends?).
And use of recursion is grounds for dismissal (Foo() invokes Foo()? Surely you must be joking!).
Errrm... back to reality.
It's not that C# 3.0 features are confusing to programmers; it's that the features are confusing to your boss. He's familiar with one technology and stubbornly refuses to part with it. You're about to enter the Twilight Zone of the Blub Paradox:
Programmers get very attached to their favorite languages, and I don't want to hurt anyone's feelings, so to explain this point I'm going to use a hypothetical language called Blub. Blub falls right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language.
And in fact, our hypothetical Blub programmer wouldn't use either of them. Of course he wouldn't program in machine language. That's what compilers are for. And as for Cobol, he doesn't know how anyone can get anything done with it. It doesn't even have x (Blub feature of your choice).
As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.
When we switch to the point of view of a programmer using any of the languages higher up the power continuum, however, we find that he in turn looks down upon Blub. How can you get anything done in Blub? It doesn't even have y.
C# 3.0 isn't hard. Sure, you can abuse it, but it isn't hard or confusing to any programmer with more than a week of C# 3.0 experience. Your boss's skills have just fallen behind, and he wants to bring the rest of the team down to his level. DON'T LET HIM!
Continue using anonymous funcs, the var keyword, auto-properties, and what have you to your heart's content. You won't lose your job over it. If he gets pissy about it, laugh it off.
Like it or not, if you plan on using LINQ in any situation, you're going to have to utilize some of the C# 3.0 language specs.
Your boss is going to have to warm up to them if he wants to utilize the feature sets you get from 3.5, which are numerous and worth your time investing in.
Also, from my experience leading teams, I've found that using the 3.0 specs has actually helped devs' readability and understanding of the code base. There's about a week's worth of time spent by a dev trying to understand what the syntax means, but once they get it they much prefer the new way over the old way.
Perhaps you can do a presentation once a week on each feature to everyone and get some of the developers on your side to help convince management of the benefits.
I recently moved from a bleeding-edge C# house to a C# house that was running mostly on .NET 1.1 and some 2.0 projects, using mostly only 1.1 features. Luckily, management stayed away from the code. Most of the developers love all the new features in the newer frameworks; they just don't have the time or inclination to figure them out by themselves. Once I managed to show them how they could make their own lives easier, they started using the features by themselves, and we have migrated several projects to gain the new language features and better tool advantages.
Some people are just afraid of change, because maybe you'll make them all look stupid using fancy new technologies. It could also be that your boss doesn't want the team learning new things instead of getting work done the old-fashioned way.
The var keyword can certainly be abused, but in most cases it reduces redundant code. LINQ is the main thing you want from .NET 3.5 because of the huge time savings in the amount of code you have to write. Your boss should be encouraging you to use it. Also, the base class libraries now take delegates as parameters, so you will be limiting yourself a lot by not using them. Lambdas are just fancy syntactic sugar to make delegates cleaner.
I would refer you to Effectively Integrating into Software Development Teams and Leading by Example. Two really great articles on how to deal with teams that are afraid of change.

Stuck with JVM, Sick of Java... Where to go? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
For the next 3 years I will have to work with the JVM (a project requirement) using a very specific third-party API. They want Java, but I've been given leeway to move away from it. I was hoping we could move back to the .NET framework so I could develop code in F#, being absolutely in love with OCaml. .NET development has been struck down by our customer. It is a no-go.
I've turned to looking, reading, and poking around programming blogs and forums, trying to understand which language might appeal to me more: Scala or Clojure. Those seem to have the largest communities/fan bases. Being experienced with ML languages, I see lots of people who compare Scala to ML. However, there are some real naysayers when it comes to this comparison. If Scala were that close to ML, my productivity and learning curve would benefit from the switch.
The internet is full of misinformation, and I wonder if I'm suffering from it. I don't like the syntax of Lisp (don't hurt me!), but if Scala has the warts I'm reading about (poor IDE support, an in-flux unit-testing framework, performance issues), I'm wondering if Clojure is the better option. I want to be productive out of the gate, using functions as first-class objects and minimizing concurrency pain.
So anyways, before I spend too much time on the internet and not working... I'm stuck with the JVM, sick of Java and wondering where to go?
In my opinion, neither Clojure nor Scala has great IDE support, if that's really important to you. That said, here's what I can gather from my reading and experience.
Scala's pros
Faster than Clojure thanks to more static typing
Closer to ML (syntax, type-directed programming; see the sketch after this list)
Bigger standard API (Clojure's APIs grow very slowly, because they want to make sure they find the best idioms before making them public. That said, Clojure still has semi-official supplementary APIs)
Better integration practices with the typical Java toolset (Clojure is still making some choices, so less firmly established yet on this regard)
Older than Clojure (but Clojure is built on top of a very old and proven core: Lisp)
People say it has a chance of reaching the mainstream, while they wouldn't say the same about Clojure
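For what it's worth, the ML point is easy to see in code: algebraic data types and pattern matching carry over to Scala almost directly (a made-up example):

```scala
object MiniInterpreter {
  // An ML-style algebraic data type and a recursive evaluator over it.
  sealed trait Expr
  case class Num(value: Int)           extends Expr
  case class Add(lhs: Expr, rhs: Expr) extends Expr
  case class Mul(lhs: Expr, rhs: Expr) extends Expr

  def eval(e: Expr): Int = e match {
    case Num(v)    => v
    case Add(l, r) => eval(l) + eval(r)
    case Mul(l, r) => eval(l) * eval(r)
  }

  // eval(Add(Num(1), Mul(Num(2), Num(3)))) == 7
}
```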
Clojure's pros
Incredibly easy, fast and right concurrency thanks to MVCC-based STM and other concurrency mechanisms
Immutability by default helps you do the right thing first
More stable standard API
When things change, usually you don't have to rewrite any existing code
(Scala's collections are being remade again for 2.8)
(I have also read somewhere that it's common knowledge that Scala's Actors implementation needs a rethinking and rewrite.)
Easier to learn (small language, being a (very-clean) Lisp)
An opportunity for you to grow by learning something different
Clojure's performance will only get better with time; there's still room for nice optimizations in the compiler
Scala's coupling to Java feels more limiting than Clojure's (because of the interactions between Scala's and Java's static type systems). One could sometimes say the same about Clojure (its object-orientation support is not a 1:1 fit, but support for this will soon get better)
Rich Hickey has a gift for making choices that put Clojure in the position of having technical leading features that will be adopted by other languages in the decades to follow. And he also has a gift for explaining them. So use them today in Clojure, or wait to use them in another language in some number of years. :)
On distributed concurrency
If your concurrency needs are distributed, Clojure doesn't yet have anything for this unless you run it on top of Terracotta or something similar, in which case you'll be able to use all its concurrency features. If you do, you will end up with a better distributed concurrency experience than with Scala's Actors, IMO.
Conclusion
IMO Scala tries to do everything, and succeeds at doing most of it. Clojure doesn't try the same thing, but what it focuses on is more than enough and succeeds so well that most people really knowing Clojure wouldn't want to go back to something else. Disclosure: my personal preference goes, of course, to Clojure. I hope I've been able to be objective in what I wrote.
Have you considered Groovy? I don't think it is quite as functional as Scala/Clojure, but it's certainly a lot more functional than Java**. In general, I can get the same work done in Groovy with about 50% of the code it would take me in Java.
This is because Groovy is syntactically similar to Java and provides seamless access to the JDK libraries, but the addition of a lot of language features (closures, meta-programming, properties) and dynamic typing eliminates almost all the boilerplate associated with Java programming.
** I mean functional in the sense of 'functional programming' rather than 'working correctly'
I'll address the points you raised about Scala.
IDE support:
Scala doesn't have the same level of IDE support Java has -- or, for that matter, that F# should have with VS10.
That said, it has some of the best (maybe even the best?) IDE support on the JVM outside Java. Right now NetBeans is good enough, and people have consistently said IDEA is still better (hearsay). The Eclipse plugin is unstable, though.
But you mentioned a 3-year range, and the IDE support for Scala should be greatly enhanced once Scala 2.8 is out, as it will provide some compiler support for IDEs. There's no release date defined, but it looks to be within the next six months, maybe three. And the Eclipse plugin will be updated right along with it.
In-flux unit testing framework:
Yes, if by that you mean it is vibrant, evolving and well supported, instead of stagnant and abandoned. ScalaTest, Specs and ScalaCheck are top-quality frameworks, compatible among themselves and compatible with other Java frameworks and libraries, such as JUnit and JMock.
The testing frameworks, in fact, are almost a poster child for what is possible with Scala.
EDIT: Scala has basic unit test support in its standard library (scala.testing.SUnit). However, given that many superior, actively-supported and free alternatives have appeared, this has been deprecated and will likely not be part of the library shipped with Scala 2.8.
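To give a flavor, a minimal suite in ScalaTest's classic FunSuite style looks something like this (in recent ScalaTest versions the class is called AnyFunSuite in org.scalatest.funsuite; this is only a sketch):

```scala
import org.scalatest.FunSuite

// Each test(...) block is an independent, named test case.
class ArithmeticSuite extends FunSuite {

  test("addition is commutative") {
    assert(1 + 2 == 2 + 1)
  }

  test("division by zero throws") {
    intercept[ArithmeticException] {
      1 / 0
    }
  }
}
```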
Performance issues:
I'm unaware of any, aside from the fact that you can write lousy code, just as with any other language. People not used to functional programming will often do stuff that's not efficient, such as not using tail recursion, or concatenating lists, and the paradigm shift that Scala enables brings that to light.
At any rate, you can write Scala code as fast as Java code (even faster with some upcoming features). And you can write Scala code with functional features almost as fast as Java code.
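To illustrate the tail-recursion point: a recursive function whose recursive call is in tail position compiles down to a plain loop, and the @tailrec annotation asks the compiler to verify that (a made-up example):

```scala
import scala.annotation.tailrec

object TailRecDemo {
  // The inner loop is in tail position, so the compiler turns it into a
  // loop; @tailrec makes compilation fail if it cannot.
  def sum(xs: List[Int]): Long = {
    @tailrec
    def loop(rest: List[Int], acc: Long): Long = rest match {
      case Nil     => acc
      case x :: tl => loop(tl, acc + x)
    }
    loop(xs, 0L)
  }

  def main(args: Array[String]): Unit =
    println(sum((1 to 1000000).toList))  // no StackOverflowError
}
```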
Quite frankly, get another job.
If you are to spend the next three years feeling uncomfortable on what you're doing, you should consider looking for more attractive alternatives.
Even if you manage to get a language you like, if you are part of a team (which I guess you are), the rest of the team might not like that language. If the rest of them code in Java and you in "fill in the blank" programming language, then problems may arise.
It is not that bad after all.
Talk with your boss and let him know how you feel. Start looking for alternatives and have a nice and professional "leave".
There is no reason why you can't still have a good relationship with your current boss. If they eventually have a new project for .NET, you may come back. Talk about that with them too. Leave your doors open.
It's not really a zero-sum game - learn them all!
PS: I vote for Clojure; I find it the most fun!
You should consider yourself lucky that you can use the JVM, because the JVM is becoming more and more popular for alternative programming languages than Java.
Besides Java there's Groovy, Scala, Clojure (a Lisp dialect on the JVM), JRuby (Ruby on the JVM), Jython (Python on the JVM), Jaskell (Haskell on the JVM), Fan (runs on the JVM as well as the .NET CLR) and lots more, and there's also OCaml-Java, an OCaml that runs on the JVM.
So there's lots of choice in programming languages on the JVM, from purely functional to simple scripting and advanced OO languages.
Tool support for Scala and Clojure may be immature, but it's steadily improving.
Since you like F#, then Scala is most likely your best bet. I say try it out and form your own opinion - you might find that the things people gripe about are things that don't matter to you, or things you can work around.
Don't forget JRuby, and note that an IDE is optional for non-Java languages
I think you have a great situation. How many people get permission to choose the implementation language? With everything available for the JVM having your environment chosen is not much of a restriction.
You won't need great IDE support in the less verbose languages
In a language as powerful as Ruby with no type declarations, you don't need an IDE at all
Scala was developed specifically to cure the verbose-java-blues
Count yourself lucky that you have three years of work lined up :-)
Clojure might be fun and provides functional concurrency-safe design patterns
Noop? http://code.google.com/p/noop/ (experimental though)
In terms of IDE support and the other doubts you're having, Clojure doesn't do any better than Scala. And for a person with an ML/F# background (or more generally one in strict, statically typed FP languages), you'll definitely find Scala much closer to what you're used to.
If you like ML you might like CAL which is more-or-less Haskell 98 for the JVM.
It is high quality and very stable, and has good IDE support on Eclipse, but sadly is no longer under active development.

When generating code, what language should you generate?

I've worked on a number of products that make use of code generation. It seems to be the only way to achieve both a high degree of user-customizability and high execution speed.
The downside is that we are requiring users to install a compiler (primarily on MS Windows).
This has been an on-going headache, because vendors like MS keep obsoleting compilers, and some users tend to have more than one compiler installed.
We're considering using GNU C, and possibly C++, but even there, there are continual version issues.
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
Ideally there would be some way to produce generated code that would be flexible, run fast, and not expose us to the whims of third-party providers.
Maybe I'm overlooking something simple, like Java. Any ideas would be appreciated. Thanks.
If you're considering C and even assembler, take a look at LLVM first: http://llvm.org
I might be missing some context here, but could you just pin yourself to a specific version? E.g., .NET 2.0 can be installed side by side with .NET 1.1 and .NET 3.5, as well as other versions that will come out in the future. So as long as your code makes use of a specific version of a compiler, what's the problem?
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
That would be called a compiler :)
Why don't you stick to C90?
I haven't heard of many severe standards violations on gcc's side, as long as you don't use extensions.
And you can always distribute a certain version of gcc along with your product, say 4.3.2, giving users the option to use their own compiler at their own risk.
As long as all the code is generated by you (i.e. you don't embed your instructions into others' code), there shouldn't be any problems in testing against this version and using it to compile your libraries.
If you want to generate assembly language code, you may take a look at asmjit.
One option would be to use a language/environment that provides access to the compiler in code; C#, for example, offers this.
Why not ship a GNU C compiler with your code generator? That way you have no version issues, and the client can constantly generate code that is usable.
It sounds like you're looking for LLVM.
Start here: The Code Generation conference
In the spirit of "it might not be too late to add my 2 cents", as in Alvin's answer, here is something I'd think about: if your application is meant to last for some years, it is going to face several changes in how applications and systems work.
For instance, let's say you were thinking about this 10 years ago. I was watching Dexter back then, but I guess you actually have memories of how things were at that time. From what I can tell, multithreading was not much of an issue to developers in 2000, and now it is. So Moore's law broke for them. Before that, people didn't even care about what would happen with "Y2K".
Speaking of Moore's law, processors are indeed getting quite fast, so maybe certain optimizations won't even be necessary. And the array of optimizations will possibly be much bigger: some processors are getting optimizations for server-centric tasks (XML, cryptography, compression and regexes! I am surprised such things can be done on a chip) and also use less energy (which is probably very important for warfare hardware...).
My point being that focusing on what exists today as a platform for tomorrow is not a good idea. Make it work today, and surely it will work tomorrow (backward compatibility is especially valued by Microsoft; Apple is not bad, it seems, and Linux is very liberal about making things work the way you want).
There is, yes, one thing that you can do: attach your technology to something that just won't (likely) die, such as JavaScript. I'm serious, JavaScript VMs are getting terribly efficient nowadays and are only going to get better; plus, everyone loves it, so it's not going to disappear suddenly. If you need more efficiency/features, maybe target the CLR or JVM?
Also I believe multithreading will become more and more of an issue. I have a gut feeling the number of processor cores will have a Moore's law of their own. And architectures are more than likely to change, from the looks of the cloud buzz.
PS: In any case, I believe the C optimizations of the past are still quite valid under modern compilers!
I would stick to the language you are using to generate the code: you can generate and compile Java code in Java, Python code in Python, C# in C#, and even Lisp in Lisp, etc.
But it is not clear whether such languages are sufficiently fast for you. For top speed I would choose to generate C++ and use GCC for compilation.
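As a sketch of the "generate and compile in the host language" idea on the JVM, Scala can parse, compile and run generated source at runtime through its reflection ToolBox (this assumes scala-compiler is on the classpath; the generated snippet here is made up):

```scala
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

object GeneratedCodeDemo extends App {
  // The "generated" program: in a real code generator this string
  // would be produced from the user's customizations.
  val source =
    """
      |(xs: List[Int]) => xs.map(_ * 2).sum
    """.stripMargin

  val toolbox = currentMirror.mkToolBox()

  // Parse and compile the snippet, then cast to the expected function type.
  val doubledSum = toolbox.eval(toolbox.parse(source)).asInstanceOf[List[Int] => Int]

  println(doubledSum(List(1, 2, 3)))  // prints 12
}
```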
Why not use something like SpiderMonkey or Rhino (JavaScript support in Java or C++). You can export your objects to JavaScript namespaces, and your users don't have to compile anything.
Embed an interpreter for a language like Lua/Scheme into your program, and generate code in that language.
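In the same spirit as the last two suggestions, here is a hedged sketch that evaluates generated script code in-process through the standard javax.script API from Scala (which JavaScript engine you get, if any, depends on the JVM: older JDKs bundle Rhino, Java 8 bundles Nashorn, newer JDKs may bundle none):

```scala
import javax.script.ScriptEngineManager

object EmbeddedScriptDemo extends App {
  // Look up whichever JavaScript engine this JVM provides
  // (may be null on JDKs that no longer bundle one).
  val engine = new ScriptEngineManager().getEngineByName("javascript")

  // The "generated" code; a real generator would build this from user input.
  val generated =
    """
      |function scale(xs, factor) {
      |  var total = 0;
      |  for (var i = 0; i < xs.length; i++) total += xs[i] * factor;
      |  return total;
      |}
      |scale([1, 2, 3], 10);
    """.stripMargin

  // Users never run a compiler; the script is evaluated in-process.
  println(engine.eval(generated))  // prints 60 (numeric type is engine-dependent)
}
```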