Does Scala work well on proprietary JVMs? - scala

My company has a large legacy Java code base and many of our customers run WebSphere and WebLogic. We are considering starting to use Scala but have been unable to confirm that Scala (2.9.X) works well with IBM's JDK (and BEA's JRockit).
Since these JVMs pass the TCK, I would say that it should just work, but given the various problems I have had with different JVMs over the years, I am a little nervous. Are there any gotchas to be aware of when using Scala with other JVMs?
Any compiler flags to use (or avoid)?
Should I compile the code using Scala on HotSpot or on the customer's JVM?
Any problems with mixing JARs compiled using different versions of Scala/Java on different JVMs?
Any war stories, links and suggestions are welcome.

The Scala compiler should produce the same bytecode regardless of the JVM you use. I would expect Scala to run on all three platforms; however, HotSpot has tried to optimise for dynamic languages and might be slightly better (possibly not enough to worry about).
In recent years there has been less and less difference between these platforms, and in the near future I expect them all to be directly based on OpenJDK (IBM has now agreed to support OpenJDK). The JRockit and HotSpot teams have been merged for some time, since Oracle owns both.
However, if you are not running a recent version of the JDK, you may see some issues.
JVMs talk to each other very well and I would consider running Scala in its own JVM to isolate any concerns you might have.

Yes, Scala works on non-Sun JVMs. Consider, for instance, these two comments from the Scala source code:
//print SourceAnnotation in a predefined way to insure
// against difference in the JVMs (e.g. Sun's vs IBM's)
// on IBM J9 1.6 do not use ForkJoinPool
There aren't many of these. After all, the various JVMs are supposed to be compatible -- and are tested for it. But where issues do arise, action is taken to make sure things run smoothly.

Nothing I could think of.
The compiler shouldn't make a difference; in fact, if running scalac on a different VM generated different bytecode, that would definitely be a bug.
You should always run Scala code with the same version of Scala it was compiled with. Code compiled on 2.x won't run on 2.x+1 by default. Code compiled on 2.x.y should run on 2.x.y+1, though.
I agree, though, that it would be nice to get licenses from third-party vendors like IBM or Azul so those platforms could be included in testing.
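For illustration (a sketch, not part of the original answer; the version numbers are only examples), this binary incompatibility between 2.x releases is why Scala libraries are usually cross-built in sbt, producing one artifact per Scala binary version:

// build.sbt -- cross-build one library against several Scala binary versions
crossScalaVersions := Seq("2.11.12", "2.12.10", "2.13.1")
scalaVersion := "2.13.1"
// "sbt +publishLocal" then compiles and publishes the library once per listed
// Scala version, since an artifact built with 2.x cannot be used from 2.x+1.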

Related

How seamless will be dotty/scala3 integration with tech like scala-native and scala-js?

Are there any limitations we should be aware of? Will it require us to use scalafix-like tools, or will it work out of the box?
Migration from 2.13 to 3.0 in general:
Dotty uses the 2.13 collections, so there is no need to change things here - as a matter of fact, 2.13 is so close to 3.0 that the maintainers decided to skip the 2.14 release, which was supposed to serve as a stepping stone
macros will need to be rewritten - that is the biggest issue, but library maintainers have some time to do it, and some are rewriting things even now (see Quill)
there will be a few deprecations, e.g. the forSome syntax for existential types disappears (see: Dropped features in the documentation)
libraries might need to extend themselves to support the new features (union/intersection/opaque types), but until you start using the new things in your code everything works as before (see the sketch after this list)
other than that, old Scala code should work without any changes
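As a minimal sketch of two of those new features (the names parsePort, Temperature and Celsius are made up for this example, not taken from any library):

// Union type: the result is either an Int or an error message, and callers
// must handle both cases.
def parsePort(s: String): Int | String =
  s.toIntOption match
    case Some(n) => n
    case None    => s"'$s' is not a number"

// Opaque type: a compile-time-only wrapper that erases to a plain Double.
object Temperature:
  opaque type Celsius = Double
  object Celsius:
    def apply(d: Double): Celsius = d
  extension (c: Celsius) def degrees: Double = c

@main def demo(): Unit =
  parsePort("8080") match
    case n: Int      => println(s"listening on port $n")
    case err: String => println(s"bad port: $err")
  import Temperature.*
  println(Celsius(21.5).degrees)

Existing 2.13 code that never mentions union or opaque types compiles unchanged alongside definitions like these.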
Scalafix is already being used in production; e.g. Scala Steward is able to apply migrations as it updates libraries to new versions.
Scala.js is already supported as a Dotty backend next to the JVM one.
Recently the Scala Center took over Scala Native, so we should expect that Scala Native development will speed up (it was a bit stalled) and that it will eventually land as another supported backend. I cannot tell whether they will manage to deliver before the release of Dotty, but I doubt it. For now, Scala Native would have to get support for 2.12 and/or 2.13 first. Track this issue if you want to know, or ask on Gitter.
Long story short: you would need to wait for the libraries you use to get ported to Dotty, then update your macros if you wrote any; besides that, migration should be pretty much straightforward for the JVM and JS backends. Scala Native will probably take more time.

How many Scala REPLs can live in one JVM?

I have noticed that there can be two Scala REPLs in one JVM, and you can even connect a Scala REPL remotely to a running JVM. So I was just wondering: how many REPLs can you have in one JVM, and what is that bounded by?
It depends on how far you would like to go to achieve it. Technically, I don't think there is any hard limit except for memory. Production-grade application servers (such as Tomcat) can run pretty much any code in a well-isolated environment inside a single JVM (using custom ClassLoaders, among other tricks). They can obviously run several copies of the same application.
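As a minimal sketch (assuming Scala 2.12, with scala-compiler and scala-reflect on the classpath), each REPL is just an IMain instance, so several can coexist in one JVM:

import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

object TwoRepls {
  def main(args: Array[String]): Unit = {
    def newRepl(): IMain = {
      val settings = new Settings
      settings.usejavacp.value = true // reuse the host JVM's classpath
      new IMain(settings)
    }

    // Two independent interpreters; bindings in one are invisible to the other.
    val repl1 = newRepl()
    val repl2 = newRepl()
    repl1.interpret("val x = 40 + 2")        // x exists only in repl1
    repl2.interpret("val x = \"forty-two\"") // an unrelated x in repl2

    repl1.close()
    repl2.close()
  }
}

In practice the bound is memory (each interpreter keeps its own compiler instance and class loader) rather than any fixed count.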

Convert Scala to a native binary

What is the best way to convert Scala (or bytecode) to a native binary in order to increase performance?
At this moment I see two solutions for converting JVM bytecode into a sort of self-contained native binary:
Avian - lightweight embeddable JVM with AOT features
Excelsior JET - Commercial Java native compiler
Both should be compatible with Scala.
There are no direct native compilers for Scala as far as I know. There are some projects like Scala LLVM, but they are more about research and proofs of concept than ready-to-use tools.
Although not a fully capable tool yet, the scala-native project is starting to become usable. It's still at an early stage, but it's under active development and is becoming more capable by the month. It's based on LLVM and clang, and will compile your Scala sources to binaries, provided the libraries you depend on are among those implemented at this early stage. (It's not yet working on Windows or Cygwin, although it does work in the WSL environment.)
Update: Windows support has been improving recently (fall 2021).
Whether performance is increased or not is a separate question, although most programs are likely to start up much more quickly.
Here's a link to the User's Guide
To create your own project: Minimal sbt project
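As a rough sketch of such a project (the plugin and Scala version numbers below are only examples; check the Scala Native docs for current ones):

// project/plugins.sbt -- the Scala Native sbt plugin
addSbtPlugin("org.scala-native" % "sbt-scala-native" % "0.4.9")

// build.sbt -- enable the plugin and pick a Scala version the plugin supports
enablePlugins(ScalaNativePlugin)
scalaVersion := "2.13.10"

// src/main/scala/Main.scala -- an ordinary Scala entry point
object Main {
  def main(args: Array[String]): Unit =
    println("hello from a native binary")
}

Running sbt nativeLink then produces a self-contained executable under the target directory.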
The biggest limitations are that only a subset of the Java and Scala standard libraries has been implemented so far, so you'll need to limit yourself to what's currently available, and a project will only be feasible if you can restrict yourself to (close to) 100% Scala. Also, the documentation is a work in progress.
As a test, I created a command-line tool for processing text files, and I was eventually able to get it to work, although I did spend a bit of time figuring out how to accomplish various things, mostly how to live with the available libraries. If necessary, you can also link to C/C++ libraries, although I didn't need to for my small project.
Footnote as of June 2019: I'm having good luck with GraalVM native-image. Here's the link:
https://www.graalvm.org/docs/reference-manual/aot-compilation/

How strongly is Scala tied to the JVM?

I have been wondering if Scala has any particular properties that make it inherently dependent on the JVM, or if it could be viable on top of something else. I can see how both the JVM's ubiquity and continued improvements, and the interoperability between Java and Scala, are strong arguments for this strategic choice. However, I understand that for this reason, compromises were made in the language design.
If the days of decline were to come for the JVM, would Scala go down with the ship, or could there be life after the JVM?
There were projects to have Scala running on the .NET runtime (discontinued; the person who worked on it is now improving the compiler backend for future versions of Scala) and on LLVM (stuck). Moreover, there are several backends for Scala -> JavaScript (e.g. Scala.js), so I would say it is possible to untie Scala from the JVM in some sense.
At the same time, many Scala APIs depend on Java APIs, and many optimizations and inner workings are implemented with respect to the JVM. There are a number of discussions on the mailing lists about Scala without the JVM, Scala with its own virtual machine, and so on, e.g. this one, but as far as I know the official position is to support non-mainstream JVMs as well (Avian, for example), rather than to have its own runtime. This way Scala can be run on iOS and Android (and PCs, of course).
As Simon Ochsenreither noted, Avian is not just yet another JVM, but comes with some distinct advantages compared to HotSpot:
Ability to create native, self-contained, embeddable binaries
Runs on iPhone, Android and other ARM targets
AOT and JIT compilation are both fully supported
Support for tail calls and continuations
An intelligible code base
Responsive maintainers
Open for improvements (value classes, specialization, etc.)

What does Mirah offer over JRuby, Groovy and Scala?

What does the Mirah language offer over JRuby, Groovy and Scala?
Unlike full-featured languages, which come with their own libraries, Mirah is more like a different "frontend" to the Java libraries.
Mirah code does not depend on its own runtime environment (except for the Mirah compiler at compile time).
That's the main benefit: a different syntax for Java.
According to an interview with Mirah's creator, the point of Mirah (which means "ruby" in Javanese) is to create a high-performance variant of Ruby. Enough Ruby-like syntax to make it comfortable to work with, but still close enough to Java and JVM semantics so that it can run without the overhead of a big runtime layer on top of the JVM.
Choice quote:
Much of the benefit of Mirah over similar languages comes down to being so lightweight. In Groovy, Scala, JRuby, Clojure, or Jython, the minute you write "Hello, world", you've shackled yourself to a runtime library. In Mirah, "Hello, world" is just as terse as in JRuby, but has the added benefit of not foisting any dependencies on you; source file goes in, class file comes out, and that's it. I believe the JVM needs a new dependency-free language, and Mirah is my attempt to deliver one.
While JRuby's performance rivals or exceeds other Ruby interpreters, the fastest JRuby code still lags pure Java performance by an order of magnitude. While you can expect the performance of JRuby to improve with the 1.6 release, Mirah is an attempt to break through the performance ceiling and provide an option for programmers looking for execution speeds on par with Java code.
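For contrast (an illustrative sketch only; the jar name below is an example), even a compiled Scala "Hello, world" carries the runtime dependency the quote is talking about:

object Hello {
  // After "scalac Hello.scala", running the class still needs the Scala
  // runtime library on the classpath, e.g.:
  //   java -cp .:scala-library-2.13.x.jar Hello
  // Mirah's claim above is that its compiled classes need no such library.
  def main(args: Array[String]): Unit = println("Hello, world")
}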
vs. Groovy
Syntax more familiar to existing Ruby/JRuby programmers
Statically typed
vs. JRuby
Statically typed
vs. Scala
Syntax more familiar to existing Ruby/JRuby programmers
The MAIN advantages are static typing (faster performance on the JVM and much easier interop with existing Java libraries) and a familiar syntax (if you come from Ruby).
When dependencies are a consideration (developing an Android app, for example), you shouldn't let this guide your language choice. Using a tool like ProGuard will level the playing field.
If you're coming from Ruby, then Mirah is a good choice. If you're coming from Erlang or Haskell, then you'll want Scala. If you're a LISPer, then you'll want to take a look at Clojure.
If your only prior experience is Java, then shame on you! - you should probably go for Scala. It's rapidly gaining a reputation as the heir apparent to Java, tool support is currently stronger, and you'll be in a large community of others who made the same transition, so there are plenty of blogs/tutorials already available.
and Groovy? Groovy is almost never the right choice nowadays...
I use Mirah everyday on Google AppEngine.
Here are my reasons to use Mirah:
no runtime library
very nice syntax
as fast as Java
Having Java under the hood is very helpful too:
solid typesystem
well documented
known solutions for common problems
I did some Groovy, a lot of JRuby, and no Scala.
If you know these, try Mirah.
If not, I'd go with JRuby.
Mirah is just another Ruby-ish syntax for Java. IMHO it is not good at all: it knows nothing about generics, and it also has poor tooling. Better to try Ceylon, Xtend, Scala, Kotlin, etc.
Mirah compiles to Java classes (not sources anymore). Xtend compiles to Java sources, so it is simpler to find out what it does under the hood. Ceylon and Scala have their own stdlibs (nevertheless, Java interop is nearly perfect in both); I am not sure about Kotlin. Kotlin is JetBrains' child and thus tied to IDEA.
I don't like JRuby either. It has too many bugs in Java interop, and it also has too many reinvented wheels: encodings (it does not use Java strings and regexes but custom strings on top of raw byte buffers), IO, exception handling, threads, etc.
The only advantage of JRuby is that it is Ruby. Much Ruby code will just work as it is.
Groovy, OTOH, does not reinvent the wheel; it uses well-tested Java libraries and just adds syntactic sugar to them. Also, Groovy-Java interop is great, and it supports generics. Threads, exceptions, strings, collections - these are just Java classes, as they are in Java.