Scala and Akka on Linux: compiling and executing

I'm trying to run a Scala/Akka-based program on a Linux cluster machine. I was following this tutorial (the only example I could find):
http://doc.akka.io/docs/akka/2.0/intro/getting-started-first-scala.html
It says to use this command to obtain the Akka library:
git clone git://github.com/akka/akka.git
But the cloned repository doesn't contain any JAR files, contrary to what the tutorial says.
I'm not able to get a basic Akka/Scala/SBT combination working on Linux, so any help on this is much appreciated. I can't find a clear source for the commands needed to compile and execute, with and without SBT.
Java version: "1.8.0_31"
Scala version: "2.11.5"
Akka: I'm not sure; I did a git clone, so I believe it's the latest
SBT: 0.13.9
Java and Scala are already installed on the cluster; I just had to use module load.

You can start with this simple example: https://github.com/izmailoff/SmallAkkaProjectExample. It will help you to understand how to compile and run a project.
Usually the steps are:
Start SBT
Compile code
Run it from SBT
or:
Start SBT
Compile executable JAR
Run the JAR from command line
If you want a more advanced example take a look at: https://github.com/izmailoff/remote-akka-server-template
In either of these projects you don't need to download any libraries/JARs. SBT will download everything you need automatically.
In short, you need to understand how to build and run projects with SBT - this is not specific to Akka. Separately from that, you need to know how Akka runs, i.e. ActorSystem, kernel, etc.
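The dependency setup described above can be sketched as a minimal build.sbt; the project name and the Akka version shown here are assumptions and should match whatever release you target:

```scala
// build.sbt - minimal sketch (name and version numbers are assumptions)
name := "akka-hello"

scalaVersion := "2.11.5"

// SBT fetches this from the public repositories automatically;
// no manual JAR downloads or git clone of the Akka sources needed
libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.3.9"
```

With this in place, sbt compile resolves and downloads Akka, and sbt run starts the main class (and its ActorSystem) from within SBT - the first workflow in the list above.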

Related

SBT always downloads the packages/scala libraries on Docker, docker-compose

I recently installed SBT on a Docker Ubuntu machine to get started with Scala. When I started Docker initially, it grabbed all the Java and sbt JARs from remote locations (https://repo.scala-sbt.org/scalasbt/debian/sbt-0.13.17.deb).
But whenever I run the sbt command, it starts downloading the sbt JARs again. Is there a way of maintaining a global cache so that artifacts are only downloaded once, and not every time I connect to the Docker container?
My solution to this was a multi stage build.
Have a "base" Docker image.
Copy in only build.sbt, projects.sbt and the file which sets the sbt version from your project.
That defines the required dependencies. The last line in that base image is "sbt update", i.e. fetch them. That base image has the dependencies in it and is reusable. Just remember to rebuild it when you change library versions etc.
In the "build" image, copy over the project and proceed as normal. Make sure sbt is resolving from maven-local, and it should use the "cache" which is already in place from the step above.
I’d be interested to hear other answers, but that’s my solution… YMMV :-).
That works for me on a cloud / Kube pipeline.
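The approach above can be sketched as a multi-stage Dockerfile. The base image tag and file names here are assumptions and will differ per project:

```dockerfile
# Stage 1: "base" image that caches dependencies
# (image tag is an assumption; any image with sbt installed works)
FROM hseeberger/scala-sbt:8u181_2.12.7_1.2.6 AS base
WORKDIR /app
# Copy in only the build definition files, not the sources
COPY build.sbt projects.sbt ./
COPY project/build.properties project/
# Fetch all dependencies; this layer is reused until the build files change
RUN sbt update

# Stage 2: actual build, reusing the cached dependencies from "base"
FROM base AS build
COPY . .
RUN sbt compile
```

Because Docker caches each layer, sbt update only re-runs when build.sbt or the other build definition files change, not on every source edit.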

Run a Gatling project from an executable JAR

I have a small Gatling project which I would like to package through sbt and then run on different Linux/Windows machines with different JVM parameters. I already tried the sbt package command, but that didn't work out. Has anyone done something similar before?
Package doesn't include the dependencies of your project.
You need something like sbt-assembly, sbt-pack or sbt-native-packager.
Then you can start your tests from within a main method (depending on the type of your package, e.g. java -jar fat-jar-name.jar).
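As a sketch, assuming sbt-assembly is the plugin chosen: add it in project/plugins.sbt and point the fat JAR's main class at Gatling's own launcher (the plugin version here is an assumption):

```scala
// project/plugins.sbt (plugin version is an assumption)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// build.sbt: make the fat JAR start Gatling's bundled main class
mainClass in assembly := Some("io.gatling.app.Gatling")
```

After sbt assembly, the resulting fat JAR can be run on any machine with java -jar, and JVM parameters (heap size, GC flags, etc.) can be passed on the java command line per machine.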

Running a JAR with the right version of Scala

I have a Scala project that I can run with sbt. Now I want to execute it as a standalone app. How can I run it with the scala command and ensure the right version of everything, in particular the scala-library and JVM?
PS: Just to clarify what I mean by an example from ruby.
I write some Ruby scripts which I use routinely. For example, to apply updates a bit more cleanly than the OS does.
When I wrote the script I used Ruby 1.8.3 (making version numbers up here as an example). I used the command line ruby my_script.
Now the version is Ruby 2.0.1. The script will not run under this Ruby because of language changes, so I use the command line ruby1.8.3 my_script.
Or rather I embed it in the shebang line.
Similarly, I want to run a JAR using scala my_jar.jar, but it was built using Scala 2.8.3 and will not run with that command. How do I make scala use the right version?
Two things you can try here:
1) The sbt-native-packager will create an executable. For example, you can create an RPM or Debian package.
2) You can create an "uber jar" using (say) sbt-assembly and package all your dependencies into a fat jar. You can specify a main function to be executed when running the jar (using the JVM java -jar a-jar-to-execute.jar command).
Both of the above are very active and widely used SBT plugins.
After you create the JAR, everything is in bytecode and you don't need to worry about Scala versions at that point. It will run as long as the Java (JVM) versions are compatible.
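A minimal sketch of option 2, assuming sbt-assembly is already added to project/plugins.sbt: pin the Scala version in build.sbt and let the fat JAR bundle the matching scala-library, so no system-wide scala command is involved at all (the main class name is hypothetical):

```scala
// build.sbt sketch: the fat JAR bundles the scala-library matching this version
scalaVersion := "2.11.5"

// hypothetical main class; the JVM runs it with plain `java -jar`
mainClass in assembly := Some("com.example.Main")
```

After sbt assembly, running java -jar on the produced assembly JAR works on any compatible JVM, because the exact scala-library the project was compiled against travels inside the JAR.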

How to use existing IntelliJ projects as I move forward with SBT?

Novice SBT question - now that I've started with some basic SBT tutorials, I'd like to start using SBT build files (within IntelliJ) a lot more often. However, there are a couple of problems with this:
1) Existing projects that I currently publish to a jar, and later import into other projects... how do I publish this jar file to my local repository? SBT publish-local doesn't seem to fit my situation, because the project was made in IntelliJ and is not (yet) an SBT project.
2) Suppose I do convert the project to an SBT build setup (and then import it into IntelliJ)... how do I configure IntelliJ to publish-local (update) each time I build the project? I do not see many configurable settings around SBT within the new IntelliJ SBT support.
Using Intellij 13 and SBT 0.13.1
Thanks!
To get started quickly on using SBT to drive IDEA, have a look at my template project called skeleton.
It supports most of the basic tasks you'd want to do.
To publish to your repository, use the publish task.
hope that helps!
For publishing, you simply use the publish action.
To specify the repository, assign a repository to publishTo and optionally set the publishing style. For example, to upload to Nexus:
publishTo := Some("Sonatype Snapshots Nexus" at "https://oss.sonatype.org/content/repositories/snapshots")
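Putting that together, a publishing sketch in build.sbt might look like this (the repository URL is the one from the answer; the credentials file path is an assumption):

```scala
// build.sbt publishing sketch
publishMavenStyle := true

publishTo := Some("Sonatype Snapshots Nexus" at "https://oss.sonatype.org/content/repositories/snapshots")

// credentials file location is an assumption; adjust to your setup
credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
```

For the local-repository case in question 1, publishLocal with no extra configuration installs the artifact into the local Ivy cache, where other SBT projects can resolve it.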
As for your second question, despite being a JetBrains fanboy, I have found SBT integration quite disappointing. For one thing, as the JetBrains documentation states itself, you need two plugins: their plugin and sbt-idea. You use sbt-idea to synchronize the IDEA module structure with the SBT build, and you use JetBrains' idea-sbt-plugin to execute SBT tasks in the "Before Launch" action in Run Configurations.
It sounds like you want to do an "install" on every build, so "Before Launch" action support isn't useful. I would suggest writing your own custom SBT task to install on build and using the Command Line Tools Console to execute that task with SBT as if from the command line. I know; that indirection is annoying.
Bear in mind one more thing: I have found numerous bugs with idea-sbt-plugin, at least on Mac. JetBrains told me the next version will be much better, and you can see for yourself with the next EAP version.
I certainly welcome others who have managed to have more success than I have to chime in.

Native libraries with Jenkins

I am using Jenkins as my CI server for a project that makes use of a native library. The project is in Scala and I am using sbt to compile and run the unit tests. One of the libraries that I am using is a Java (JNI) wrapper around a C library.
I have added the location of the library to the LD_LIBRARY_PATH and location of the jar to CLASSPATH in my .bashrc so that I can run the project and unit tests from the command line.
How do I do this for Jenkins?
I have recently had a problem when copying artifacts from remote nodes, which was fixed by adding the following to the advanced setting "JVM options" of the relevant node:
-Djava.library.path=/lib/x86_64-linux-gnu/
This is quite simple and is obvious to anyone reviewing the settings, for example when replicating the node configuration to use a similar machine. I do not recommend touching system- or user-wide scripts, in general.
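If you prefer to keep the setting in the build itself rather than in the Jenkins node configuration, an sbt 0.13-style sketch would be (the library path is the one from the answer above):

```scala
// build.sbt sketch: run tests in a forked JVM so the option takes effect
fork in Test := true

// point the test JVM at the directory containing the native library
javaOptions in Test += "-Djava.library.path=/lib/x86_64-linux-gnu/"
```

Forking matters here: without fork in Test := true, tests run inside SBT's own JVM and the javaOptions setting is not applied.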