I have noticed that there can be two Scala REPLs in one JVM, and that you can even connect a Scala REPL remotely to a running JVM. So I was wondering: how many REPLs can you have in one JVM, and what is that bounded by?
It depends on how far you are willing to go to achieve it. Technically I don't think there is any hard limit other than memory. Production-grade application servers (such as Tomcat) can run pretty much any code in a well-isolated environment inside a single JVM (using custom ClassLoaders, among other tricks). They can obviously run several copies of the same application.
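As an illustration that the limit really is just memory, here is a minimal sketch of embedding two interpreters in one process. It assumes Scala 2.12 with scala-compiler on the classpath (IMain is essentially the REPL's engine; this API has moved around in later Scala versions), and the object name is made up:

import scala.tools.nsc.Settings
import scala.tools.nsc.interpreter.IMain

object TwoRepls {
  def main(args: Array[String]): Unit = {
    def newRepl(): IMain = {
      val settings = new Settings
      settings.usejavacp.value = true // reuse the host JVM's classpath
      new IMain(settings)
    }

    // Two independent interpreters living side by side in the same JVM,
    // each with its own class loader and its own bindings.
    val repl1 = newRepl()
    val repl2 = newRepl()

    repl1.interpret("val x = 1 + 1")
    repl2.interpret("val x = 40 + 2") // no clash: each interpreter keeps separate state

    repl1.close()
    repl2.close()
  }
}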
Say you have an sbt-based project with distributed aims. That is, your project contains one or more Play applications (with a hierarchy of subprojects) as well as, say, a few other services of your own, tools to fill in test data, load it, and so on. To develop such a project you inevitably want to start many main()s simultaneously.
At the moment I have solved the problem this way: an sbt terminal session is used to run Play, and the Scala IDE is used for everything else. To eliminate any clashes I was forced to write my own template engine and router for Play (that is, eliminating managed sources, in Play's terms).
OTOH, I don't want to be tightly tied to the Scala IDE (or any IDE); I'd like to be able to start many main()s simultaneously (tracking the output of each of them) from the sbt sessions themselves.
What does your sbt-based development environment for distributed systems look like?
While not immediately helpful, you may be interested in the backgroundRun task some of us are working on adding to sbt. For now the API is in this plugin code:
https://github.com/sbt/sbt-remote-control/blob/master/ui-interface/src/main/scala/sbt/BackgroundRun.scala
It's just a version of run that doesn't block, and you can manage jobs much as you can in bash.
Supporting Play is harder than supporting generic run; it requires this PR: https://github.com/playframework/playframework/pull/3591
Anyway, none of this is published and working yet, but we are working on it.
The best short-term solution, I think, is to run multiple instances of sbt and be careful not to compile in more than one of them at once. If you get any weird build or classpath errors, do a clean and make sure it wasn't caused by multiple compiles stomping on each other.
We are building a multi-component, Akka-cluster-based system. Every component is a separate Play project; on startup it joins the Akka cluster, and the components then start looking up each other's actors to do their work.
Problem
I have two problems with this setup:
Writing test code is very hard (we haven't figured this out yet), especially for tests that rely on multiple actors coming from different components. How can we resolve the dependencies and create a proper cluster in the test code (between two Play applications!)?
During development, every developer has to start multiple sbt instances to boot the system (one per Play project); this eats the entire system's memory and development becomes incredibly slow.
What I'm Looking For
I was thinking of using the cluster "roles" property to do selective boot-up. This means there would be only a single Play project whose components are modules; on boot, the Play application would inspect the current "roles" property of the instance and, based on that, start or stop certain components/actors.
This would make testing easier, but I don't know exactly how to do this in Play, especially since some components actually offer a RESTful API (Play controllers) and I don't know how to enable/disable routes and controllers at boot time.
Is there any document or other resource that shows the right way to build a modular distributed system, or any clues (including how sbt should be set up)?
Edit 1: The project lives in a single repository and has a single sbt build (multiple-projects)
This is a good question, and I’ll answer it in parts, although I am not a Play expert.
1 – Writing Tests
I would recommend testing modules in isolation to avoid the exponential explosion of necessary test cases. To this end actors are a very nice abstraction because you can trivially mock any actor by injecting a TestProbe instead of the real ActorRef. In a cluster you will typically want to look up services on other nodes, which means that in a test you construct your probe and inject its path (probe.ref.path) instead of the path you would look up in the production system.
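As a minimal sketch of that injection (assuming akka-testkit is on the classpath; the Greeter actor and its lookup path are made up for illustration), the test can hand the production actor a probe's path instead of the real service's path:

import akka.actor.{Actor, ActorSystem, Props}
import akka.testkit.TestProbe

// Hypothetical production actor that is told, via its constructor,
// where to find the "greeting service" it depends on.
class Greeter(servicePath: String) extends Actor {
  val service = context.actorSelection(servicePath)
  def receive = {
    case name: String => service ! s"Hello, $name"
  }
}

object TestProbeSketch extends App {
  implicit val system: ActorSystem = ActorSystem("test")

  val probe = TestProbe()

  // Inject the probe's path instead of the path of the real remote service.
  val greeter = system.actorOf(Props(new Greeter(probe.ref.path.toString)))

  greeter ! "world"
  probe.expectMsg("Hello, world") // the probe stands in for the real service

  system.terminate()
}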
The second aspect concerns integration tests, for which you want multiple services to participate. In this case you don't need to start a "proper" cluster involving multiple JVMs; you can just spin up multiple ActorSystems within your test and have them communicate on "localhost".
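A rough sketch of such a test setup, assuming classic Akka remoting with the akka-cluster module (hence the netty.tcp config keys; newer Akka versions use Artery instead):

import akka.actor.ActorSystem
import akka.cluster.Cluster
import com.typesafe.config.ConfigFactory

object LocalClusterSketch extends App {
  // Bind both nodes to localhost on random free ports.
  val config = ConfigFactory.parseString("""
    akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
    akka.remote.netty.tcp.hostname = "127.0.0.1"
    akka.remote.netty.tcp.port = 0
  """)

  // Two ActorSystems in the same JVM, acting as two cluster nodes.
  val sys1 = ActorSystem("integration", config)
  val sys2 = ActorSystem("integration", config)

  val seed = Cluster(sys1).selfAddress
  Cluster(sys1).join(seed) // the first node joins itself, forming the cluster
  Cluster(sys2).join(seed) // the second node joins the first

  // ...exercise actors deployed on sys1/sys2 here, then shut both down.
  sys1.terminate()
  sys2.terminate()
}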
2 – Development Deployments
It is not necessary to run multiple instances of sbt; you can just create a suitable Main class which starts all required ActorSystems within the same process, just as for the tests mentioned above.
3 – Node Role Management
The ActorSystem managed by Play will typically have a "frontend" role. In addition to that one, you can start more systems with different roles, which are not Play applications by themselves. It makes sense to trigger different behavior based on the node's role (starting different services and initiating different activities); we do that ourselves in tests and real applications.
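A hedged sketch of that role-based startup (the "worker"/"frontend" role names and the StorageService actor are made up; it assumes the akka-cluster module with classic remoting config keys):

import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.Cluster
import com.typesafe.config.ConfigFactory

// Hypothetical service that should only run on "worker" nodes.
class StorageService extends Actor {
  def receive = { case msg => println(s"storing $msg") }
}

object RoleBasedStartup extends App {
  // In production each node sets its roles in its own application.conf,
  // e.g. akka.cluster.roles = ["frontend"] on the Play node.
  val config = ConfigFactory.parseString("""
    akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
    akka.remote.netty.tcp.hostname = "127.0.0.1"
    akka.remote.netty.tcp.port = 0
    akka.cluster.roles = ["worker"]
  """)

  val system = ActorSystem("cluster", config)
  val roles  = Cluster(system).selfRoles

  // Start only the services that belong to this node's roles.
  if (roles.contains("worker"))
    system.actorOf(Props[StorageService], "storage")
  if (roles.contains("frontend"))
    println("frontend node: Play serves HTTP; no backend actors started here")
}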
On the question of disabling certain routes for certain roles I do not know enough to answer.
I have been wondering if Scala has any particular properties that make it inherently dependent on the JVM, or if it could be viable on top of something else. I can see how both the JVM's ubiquity and continued improvements, and the interoperability between Java and Scala, are strong arguments for this strategic choice. However, I understand that for this reason, compromises were made in the language design.
If the days of decline were to come for the JVM, would Scala go down with the ship, or could there be life after the JVM?
There were projects to have Scala running on the .NET runtime (discontinued; the person who worked on it is now improving the compiler backend for future versions of Scala) and on LLVM (stuck). Moreover, there are several Scala-to-JavaScript backends (e.g. Scala.js), so I would say it is possible to untie Scala from the JVM in some sense.
At the same time, many Scala APIs depend on Java APIs, and many optimizations and inner workings are implemented with the JVM in mind. There are a number of mailing-list discussions about Scala without the JVM, Scala with its own virtual machine and so on, e.g. this one, but as far as I know the official position is to support non-mainstream JVMs as well (Avian, for example) rather than to have an own runtime. This way Scala can be run on iOS and Android (and PCs, of course).
As Simon Ochsenreither noted, Avian is not just yet another JVM, but comes with some distinct advantages compared to HotSpot:
Ability to create native, self-contained, embeddable binaries
Runs on iPhone, Android and other ARM targets
AOT and JIT compilation are both fully supported
Support for tail calls and continuations
An intelligible code base
Responsive maintainers
Open for improvements (value classes, specialization, etc.)
My company has a large legacy Java code base and many of our customers run WebSphere and WebLogic. We are considering starting to use Scala but have been unable to confirm that Scala (2.9.X) works well with IBM's JDK (and BEA's JRockit).
Since these JVMs pass the TCK, I would say that it should just work, but given the various problems I have had with different JVMs over the years, I am a little nervous. Are there any gotchas to be aware of when using Scala with other JVMs?
Any compiler flags to use (or avoid)?
Should I compile the code using Scala on HotSpot or on the customer's JVM?
Any problems with mixing JARs compiled using different versions of Scala/Java on different JVMs?
Any war stories, links and suggestions are welcome.
The Scala compiler should produce the same bytecode regardless of the JVM you use. I would expect Scala to run on all three platforms; however, HotSpot has tried to optimise for dynamic languages and might be slightly better. (Possibly not enough to worry about.)
In recent years there has been less and less difference between these platforms, and in the near future I expect them all to be directly based on OpenJDK (as IBM has now agreed to support OpenJDK). The JRockit and HotSpot teams have been merged for some time, since Oracle owns both.
However, if you are not running a recent version of the JDK, you may see some issues.
JVMs talk to each other very well and I would consider running Scala in its own JVM to isolate any concerns you might have.
Yes, Scala works on non-Sun JVMs. Consider, for instance, these two comments from the source code:
//print SourceAnnotation in a predefined way to insure
// against difference in the JVMs (e.g. Sun's vs IBM's)
// on IBM J9 1.6 do not use ForkJoinPool
There aren't many of these. After all, the various JVMs are supposed to be compatible -- and are tested for it. But when issues do arise, action is taken to make sure things run smoothly.
Nothing I could think of.
The compiler shouldn't make a difference; in fact, if running scalac on a different VM generated different bytecode, that would definitely be a bug.
You should always run Scala code with the same version of Scala it was compiled with. Code compiled on 2.x won't run on 2.x+1 by default. Code compiled on 2.x.y should run on 2.x.y+1, though.
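In sbt terms, this is why Scala libraries are published per binary version and pulled in with %% rather than %, so that the artifact matching your scalaVersion is resolved. A small build.sbt illustration (version numbers chosen only as an example):

scalaVersion := "2.11.8"

// "%%" appends the Scala binary version, resolving scala-xml_2.11 here;
// an artifact built for 2.10 would not be binary compatible with this build.
libraryDependencies += "org.scala-lang.modules" %% "scala-xml" % "1.0.5"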
I agree though, that it would be nice to get licenses from third-party vendors like IBM or Azul to include those platforms into testing.
Say they use Scala to process incoming email, etc.
In what context are they (or could they be) running Scala?
Can it run inside its own daemon?
Can it run inside of Tomcat?
Or would you use it in a cron job?
or is it all of the above? :)
Sorry, this is an open question and I don't know much about Scala, but I just want an idea of how one could utilize Scala and in what contexts it can run.
Scala is a general-purpose language and can be used pretty much everywhere that doesn't impose restrictions of its own. One limitation it has is that it must be backed by a Java VM (or .NET if you go with the experimental stuff), which can bring limitations of its own.
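For example, a Scala "daemon" is just an ordinary JVM process. A minimal sketch (the object name and polling interval are made up) that could be launched from an init script, a container, or a cron wrapper:

object MailPoller {
  def main(args: Array[String]): Unit = {
    // Runs like any other JVM app: scalac MailPoller.scala && scala MailPoller
    while (true) {
      println("checking inbox...") // real mail-processing work would go here
      Thread.sleep(60000)          // poll once a minute
    }
  }
}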
When people say "backend", they usually mean using Scala to provide services to other software, leaving the user-facing layers to other languages. This combination leverages the speed advantages of static typing in Scala, together with whatever benefits in development speed and interactivity other languages -- PHP or Ruby, for example -- might provide for UI front ends.