How strongly is Scala tied to the JVM?

I have been wondering whether Scala has any particular properties that make it inherently dependent on the JVM, or whether it could be viable on top of something else. I can see how the JVM's ubiquity and continued improvement, together with Java-Scala interoperability, are strong arguments for this strategic choice. However, I understand that compromises were made in the language design for this reason.
If the days of decline were to come for the JVM, would Scala go down with the ship, or could there be life after the JVM?

There have been projects to run Scala on the .NET runtime (discontinued; the person who worked on it is now improving the compiler backend for future versions of Scala) and on LLVM (stalled). Moreover, there are several Scala-to-JavaScript backends (e.g. Scala.js), so I would say it is possible to untie Scala from the JVM in some sense.
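For instance, with Scala.js the same Scala source can compile to JavaScript. A minimal sketch, assuming a project with the Scala.js sbt plugin enabled (the object and export name are illustrative):

import scala.scalajs.js.annotation.JSExportTopLevel

object Hello {
  // Exposed to JavaScript as a top-level function named "greet"
  @JSExportTopLevel("greet")
  def greet(name: String): String = s"Hello, $name"
}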
At the same time, many Scala APIs depend on Java APIs, and many optimizations and inner workings are implemented with the JVM in mind. There are a number of mailing-list discussions about Scala without the JVM, Scala with its own virtual machine, and so on (e.g. this one), but as far as I know the official position is to support non-mainstream JVMs (Avian, for example) rather than to build a dedicated runtime. This way Scala can run on iOS and Android (and PCs, of course).
As Simon Ochsenreither noted, Avian is not just yet-another JVM, but comes with some distinct advantages compared to HotSpot:
Ability to create native, self-contained, embeddable binaries
Runs on iPhone, Android and other ARM targets
AOT and JIT compilation are both fully supported
Support for tail calls and continuations (see the note after this list)
An intelligible code base
Responsive maintainers
Open for improvements (value classes, specialization, etc.)
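For context on the tail-call item: stock JVMs have no native tail-call support, so on HotSpot the Scala compiler only eliminates self-recursive tail calls. A small sketch:

import scala.annotation.tailrec

// @tailrec asks the compiler to verify this self-recursion becomes a loop
@tailrec
def sum(xs: List[Int], acc: Int = 0): Int = xs match {
  case Nil    => acc
  case h :: t => sum(t, acc + h)
}

// Mutually recursive tail calls are not optimized on HotSpot and need
// trampolines (scala.util.control.TailCalls); a VM with native tail-call
// support, such as Avian, would not require that workaround.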

Related

How seamless will be dotty/scala3 integration with tech like scala-native and scala-js?

Are there any limitations we should be aware of? Will it require scalafix-like tools, or will it work out of the box?
Migration from 2.13 to 3.0 in general:
Dotty uses the 2.13 collections, so nothing needs to change there; as a matter of fact, 2.13 is so close to 3.0 that the maintainers decided to skip the 2.14 release, which was originally supposed to serve as a stepping stone
macros will need to be rewritten - that is the biggest issue, but library maintainers have some time to do it, and some are rewriting things even now (see Quill)
there will be a few deprecations, e.g. the forSome syntax for existential types disappears (see Dropped Features in the documentation, and the sketch after this list)
libraries might need to extend themselves to support new features (union/intersection/opaque types), but until you start using the new things in your code, everything works as before
other than that, old Scala code should work without any changes
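To make the last few bullets concrete, a small sketch (Scala 3 syntax; all names are illustrative):

// Scala 2 existential syntax that is dropped in Scala 3:
//   type Xs = List[X] forSome { type X }
// The closest Scala 3 replacement is a wildcard:
type Xs = List[?]

// The new Scala 3 types mentioned above:
type StringOrInt = String | Int              // union type
type Both = Comparable[Int] & Serializable   // intersection type

object Units:
  opaque type Meters = Double                // opaque type alias
  def meters(d: Double): Meters = d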
Scalafix is already used in production, e.g. Scala Steward is able to apply migrations as it updates libraries to a new version.
Scala.js is already supported as a Dotty backend next to the JVM one.
Recently the Scala Center took over Scala Native, so we should expect its development to speed up (it had stalled a bit) and eventually land as another supported backend. I cannot tell whether they will manage to deliver before the release of Dotty, but I doubt it; for now, Scala Native would have to gain support for 2.12 and/or 2.13 first. Track this issue if you want to know more, or ask on Gitter.
Long story short: you will need to wait for the libraries you use to get ported to Dotty, then update your macros if you wrote any; besides that, migration should be pretty much straightforward for the JVM and JS backends. Scala Native will probably take more time.

Scalability in Scala

I am going through the Scala book by Martin Odersky.
It states that the Scala language is highly scalable, the reason being that it allows users to add new features that can be used as if they were native language support.
This has got me confused about the term 'scalability'.
I understand scalability to mean the ability of software to handle huge amounts of data.
So what's the difference here?
In the context of Scala, Odersky usually means that it is scalable in the sense that it can be used for a wide range of tasks, from simple scripting to large libraries to behemoth enterprise applications.
It's good for scripting because of its type inference, relatively low verbosity (compared to Java), and functional style (which generally lends itself to more concise code).
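A small illustration of that conciseness, using only the standard library:

val words = List("scala", "is", "concise")   // List[String] inferred
val lengths = words.map(_.length)            // List[Int] inferred
println(lengths.sum)                         // prints 14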
It's good for medium-sized applications and libraries because of its powerful type system, which makes it possible to write code that produces errors mostly or only at compile time rather than at runtime (to the extent that this is possible). The Play framework in particular is founded on this philosophy. Furthermore, Scala runs on the JVM and can therefore harness any of the many, many Java libraries out there.
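A sketch of that compile-time-error point, using the standard library's Option (findUser is made up for illustration):

def findUser(id: Int): Option[String] =
  if (id == 1) Some("alice") else None

// findUser(2).length          // does not compile: the absent case must be handled
println(findUser(2).map(_.length).getOrElse(0))   // compiles: absence handled explicitly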
And it's good for enterprise software because it compiles to JVM bytecode, which already has a great track record in that space; further, static typing makes the maintenance of very large codebases much easier.
Scala is also applicable to a number of other areas, making it even more "scalable": concurrency/parallelism and domain-specific languages come to mind.
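One classic illustration of this growability, relevant to the DSL point: by-name parameters let ordinary library code define what reads like a new control structure. The unless helper below is user-defined, not part of the language:

// body is evaluated lazily, only when the condition is false
def unless(cond: Boolean)(body: => Unit): Unit =
  if (!cond) body

val verbose = false
unless(verbose) {
  println("running in quiet mode")   // runs, because verbose is false
}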
Here is a presentation by Odersky; if you start at slide 6 and go forward, you'll see him explain some other uses of Scala as well.

Scala Dependency Injection for compile time with separate configuration

Now firstly I realise the title is extremely broad, so let me describe the use case.
Background:
I'm currently teaching myself Scala and Gradle (because I like the flexibility and power of Gradle and its much more legible build files).
When learning a new language, it's often best to build applications you can actually use, and as a primarily PHP (with Symfony) programmer and former Java programmer, there are many patterns that carry across from both paradigms.
Use Case:
I'm writing an application where I am experimenting with a provider+interface (trait) layout. The goal is to define traits that encompass all the expected functionality for any particular type of component, e.g. a ConfigReaderTrait with a YamlConfigReader as a provider. Theoretically, the advantage of this would be to let me swap out core mechanisms or even architectural components with minimal effort, which allows for a great deal of R&D and experimenting.
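In Scala that layout might look like the following sketch, using the trait and provider names from the question (the YAML parsing itself is elided):

trait ConfigReader {
  def read(path: String): Map[String, String]
}

class YamlConfigReader extends ConfigReader {
  def read(path: String): Map[String, String] =
    Map("configuredFrom" -> path)   // a real provider would parse YAML here
}

// Swapping providers means changing only the place that instantiates one:
val reader: ConfigReader = new YamlConfigReader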
PHP Symfony Influence
I currently work as a pure PHP dev and have thus been introduced to Symfony, which has a brilliant dependency-injection framework where the dependencies are defined in YAML files and can be delegated to subdirectories. I like this because, unlike with SBT, I am unfazed by using different languages for different purposes (e.g. Groovy with Gradle for build scripts) and I want to maintain a separation of concerns.
Each type of interface/trait, or bundle of related functionality, should be able to have its own DI config, and I would prefer it separate from the Scala code itself.
Now for Scala...
Obviously things are not the same across languages, and if you don't embrace the differences you may as well go back to the previous language and leave it at that.
That said, I am not yet convinced by the DI frameworks I see for Scala.
Guice, for example, is really a modified Java framework (which is fine, because Scala can use Java libraries, but since the two languages don't work in entirely the same paradigm, it feels as though Scala's capabilities are not leveraged).
MacWire annoyed me a bit, because you have to define the dependencies in the files where you use them, which does not fit my interface/provider concept.
SubCut so far seems to be the best suited to what I would expect.
But while going through all of this (and bear in mind this is all in the research phase; I haven't used any of them yet), it seemed that DI in Scala is still very scattered and in its infancy. By that I mean there are different implementations for different applications, but none flexible or powerful enough to compare to Symfony's DI, and particularly not for my application.
Comments? Thoughts?
My 5 cents:
I have actually stopped using dependency injection frameworks after switching to Scala from Java.
The language allows for a few nice ways of doing it without a framework (multiple parameter lists and currying, as well as mixins for doing injection the way the 'cake pattern' does), and I find myself more and more just using constructor- or method-parameter-based injection, as it clearly documents what dependencies a given piece of logic has and where it got those dependencies from.
It's also fairly easy to create different modules (sets of implementations, or factories for implementations) using Scala objects and then select between those at runtime. This gives you the guarantee that the program won't compile unless an implementation is available, as opposed to the big Java-land frameworks that fail at runtime, effectively pushing a compile-time problem into runtime.
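A minimal sketch of that style (all names here are illustrative):

trait UserRepository {
  def find(id: Long): Option[String]
}

class DatabaseUserRepository extends UserRepository {
  def find(id: Long): Option[String] = Some(s"user-$id")   // stub
}

class InMemoryUserRepository extends UserRepository {
  def find(id: Long): Option[String] = None
}

// The dependency is an ordinary constructor parameter:
class UserService(repo: UserRepository) {
  def greet(id: Long): String =
    repo.find(id).fold("unknown user")(u => s"hello, $u")
}

// Modules are plain objects wiring implementations together; selecting a
// module is ordinary code, and a missing implementation fails compilation:
object ProductionModule {
  val userService = new UserService(new DatabaseUserRepository)
}

object TestModule {
  val userService = new UserService(new InMemoryUserRepository)
}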
This style also removes the 'magic' of how dependencies are created and wired (reflection, runtime weaving, macros, XML, binding context to a thread-local, etc.). I think this makes it much easier for new developers to jump into a project and understand how the codebase is interconnected.
Regarding declaring implementations in non-code formats like XML: I have found that projects rarely or never change those files without making a new release, so they might as well be code, with all the benefits that brings (IDE support, performance, type checking).

Does Scala work well on proprietary JVMs?

My company has a large legacy Java code base and many of our customers run WebSphere and WebLogic. We are considering starting to use Scala but have been unable to confirm that Scala (2.9.X) works well with IBM's JDK (and BEA's JRockit).
Since these JVMs pass the TCK, I would say it should just work, but given the various problems I have had with different JVMs over the years, I am a little nervous. Are there any gotchas to be aware of when using Scala with other JVMs?
Any compiler flags to use (or avoid) ?
Should I compile the code using Scala on HotSpot or on the customer's JVM?
Any problems with mixing JARs compiled using different versions of Scala/Java on different JVMs?
Any war stories, links and suggestions are welcome.
The Scala compiler should produce the same bytecode regardless of the JVM you use. I would expect Scala to run on all three platforms; however, HotSpot has tried to optimise for dynamic languages and might be slightly better (possibly not enough to worry about).
In recent years there has been less and less difference between these platforms, and in the near future I expect them all to be directly based on OpenJDK (IBM has now agreed to support OpenJDK). The JRockit and HotSpot teams have been merged for some time, since Oracle owns both.
However, if you are not running a recent version of the JDK, you may see some issues.
JVMs talk to each other very well, and I would consider running Scala in its own JVM to isolate any concerns you might have.
Yes, Scala works on non-Sun JVMs. Consider, for instance, these two comments from the source code:
//print SourceAnnotation in a predefined way to insure
// against difference in the JVMs (e.g. Sun's vs IBM's)
// on IBM J9 1.6 do not use ForkJoinPool
There aren't many of these. After all, the various JVMs are supposed to be compatible -- and are tested for it. But where issues do arise, action is taken to make sure things run smoothly.
Nothing I could think of.
The compiler shouldn't make a difference; in fact, if running scalac on a different VM generated different bytecode, that would definitely be a bug.
You should always run Scala code with the same version of Scala it was compiled with: code compiled with 2.x won't run on 2.x+1 by default, though code compiled with 2.x.y should run on 2.x.y+1.
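This is also why published Scala artifacts carry a binary-version suffix; in sbt the %% operator appends it automatically (the library and versions below are just examples):

// build.sbt
libraryDependencies += "org.typelevel" %% "cats-core" % "2.9.0"
// resolves to cats-core_2.13 when scalaVersion := "2.13.12"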
I agree, though, that it would be nice to get licenses from third-party vendors like IBM or Azul to include those platforms in testing.

What does Mirah offer over JRuby, Groovy and Scala?

What does the Mirah language offer over JRuby, Groovy and Scala?
Unlike full-featured languages, which come with their own libraries, Mirah is more like a different "frontend" to the Java libraries.
Mirah code does not depend on its own runtime environment (except for the Mirah compiler at compile time).
That's the main benefit: a different syntax for Java.
According to an interview with Mirah's creator, the point of Mirah (which means "ruby" in Javanese) is to create a high-performance variant of Ruby: enough Ruby-like syntax to be comfortable to work with, but still close enough to Java and JVM semantics that it can run without the overhead of a big runtime layer on top of the JVM.
Choice quote:
Much of the benefit of Mirah over similar languages comes down to being so lightweight. In Groovy, Scala, JRuby, Clojure, or Jython, the minute you write "Hello, world", you've shackled yourself to a runtime library. In Mirah, "Hello, world" is just as terse as in JRuby, but has the added benefit of not foisting any dependencies on you; source file goes in, class file comes out, and that's it. I believe the JVM needs a new dependency-free language, and Mirah is my attempt to deliver one.
While JRuby's performance rivals or exceeds other Ruby interpreters, the fastest JRuby code still lags pure Java performance by an order of magnitude. While you can expect the performance of JRuby to improve with the 1.6 release, Mirah is an attempt to break through the performance ceiling and provide an option for programmers looking for execution speeds on par with Java code.
vs. Groovy
Syntax more familiar to existing Ruby/JRuby programmers
Statically typed
vs. JRuby
Statically typed
vs. Scala
Syntax more familiar to existing Ruby/JRuby programmers
The MAIN advantages are static typing (faster performance on the JVM and much easier interop with existing Java libraries) and a familiar syntax (if you come from Ruby).
When dependencies are a consideration (developing an Android app, for example), you shouldn't let this guide your language choice; using a tool like ProGuard will level the playing field.
If you're coming from Ruby, then Mirah is a good choice. If you're coming from Erlang or Haskell, then you'll want Scala. If you're a LISPer, then you'll want to take a look at Clojure.
If your only prior experience is Java, then shame on you! You should probably go for Scala: it's rapidly gaining a reputation as the heir apparent to Java, tool support is currently stronger, and you'll be in a large community of others who made the same transition, so there are plenty of blogs/tutorials already available.
And Groovy? Groovy is almost never the right choice nowadays...
I use Mirah every day on Google AppEngine.
Here are my reasons to use Mirah:
no runtime library
very nice syntax
as fast as Java
Having Java under the hood is very helpful too:
solid type system
well documented
known solutions for common problems
I did some Groovy, a lot of JRuby, and no Scala.
If you know these, try Mirah.
If not, I'd go with JRuby.
Mirah is just another Ruby-ish syntax for Java, and IMHO not good at all. It knows nothing about generics and has poor tooling. Better to try Ceylon, Xtend, Scala, Kotlin, etc.
Mirah compiles to Java classes (not sources anymore). Xtend compiles to Java sources, so it is simpler to find out what it does under the hood. Ceylon and Scala have their own standard libraries (nevertheless, Java interop is near perfect in both); I'm not sure about Kotlin. Kotlin is JetBrains' child and is thus tied to IDEA.
I don't like JRuby either. It has too many bugs in Java interop, and it also has too many reinvented wheels; I mean encodings (it does not use Java strings and regexes but custom strings on top of raw byte buffers), I/O, exception handling, threads, etc.
The only advantage of JRuby is that it is Ruby: much Ruby code will just work as it is.
Groovy, on the other hand, does not reinvent the wheel; it uses well-tested Java libraries and just adds syntactic sugar to them. Groovy-Java interop is also great, and it handles generics. Threads, exceptions, strings, collections - these are just Java classes, exactly as they are in Java.