How to run Scala 3 applications in the command line with Coursier - scala

If you follow the steps at the official Scala 3 sites, such as Dotty or Scala Lang, they recommend using Coursier to install Scala 3. The problem is that neither of these explains how to run a compiled Scala 3 application after following the steps.
Scala 2:
> cs install scala
> scalac HelloScala2.scala
> scala HelloScala2
Hello, Scala 2!
Scala 3:
> cs install scala3-compiler
> scala3-compiler HelloScala3.scala
Now how do you run the compiled application with Scala 3?
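For reference, the two source files are assumed to be simple hello-world programs along these lines (their contents are not shown in the question):

// HelloScala2.scala (assumed contents)
object HelloScala2 {
  def main(args: Array[String]): Unit = println("Hello, Scala 2!")
}

// HelloScala3.scala (assumed contents)
@main def HelloScala3(): Unit = println("Hello, Scala 3!")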

Currently there does not seem to be a way to launch a runner for Scala 3 using coursier, see this issue. As a workaround, you can install the binaries from the GitHub release page. Scroll all the way down past the contribution list to find the .zip file, then download and unpack it to some local folder. Put the unpacked bin directory on your PATH. After restarting the terminal you will get the scala command (and scalac etc.).
Another workaround is to use the java runner directly, with a classpath fetched by coursier:
java -cp $(cs fetch -p org.scala-lang:scala3-library_3:3.0.0):. myMain
Replace myMain with the name of your @main def function. If it is in a package myPack, you need to say myPack.myMain (as usual).
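For example, a hypothetical source file matching the command above could look like this (file and package names are assumptions):

// myMain.scala (hypothetical)
package myPack

// the @main annotation generates a runnable class named myPack.myMain
@main def myMain(): Unit = println("Hello from Scala 3")

After compiling it with scala3-compiler, it can be launched with:

java -cp $(cs fetch -p org.scala-lang:scala3-library_3:3.0.0):. myPack.myMain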

Finally, it seems that it is possible to run a Scala application just like in the Scala 2 version, using scala3 from Coursier:
cs install scala3
Then, you can compile it with scala3-compiler and run with scala3:
scala3-compiler Main.scala
scala3 Main.scala

This work-around seems to work for me:
cs launch scala3-repl:3+ -M dotty.tools.MainGenericRunner -- YourScala3File.scala
This way, you don't even have to compile the source code first.
In case your source depends on third-party libraries, you can specify the dependencies like this:
cs launch scala3-repl:3+ -M dotty.tools.MainGenericRunner -- -classpath \
$(cs fetch --classpath io.circe:circe-generic_3:0.14.1):. \
YourScala3File.scala
This would be an example where you use the circe library that's compiled with Scala 3. You should be able to specify multiple third-party libraries with the fetch sub-command.
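For example, a hypothetical YourScala3File.scala using circe-generic could look like this (the names and the derivation style are assumptions, just a sketch):

// YourScala3File.scala (hypothetical)
import io.circe.Encoder
import io.circe.syntax.*
import io.circe.generic.semiauto.deriveEncoder

case class Person(name: String, age: Int)
given Encoder[Person] = deriveEncoder

// entry point picked up by the runner
@main def run(): Unit =
  println(Person("Ada", 36).asJson.noSpaces)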

Related

How to install scala 2.12

There are multiple binary-incompatible Scala 2 versions; however, the documentation says the installation is done either via an IDE or via sbt.
DOWNLOAD SCALA 2
Then, install Scala:...either by installing an IDE such as IntelliJ, or sbt, Scala's build tool.
Spark 3 needs Scala 2.12.
Spark 3.1.2 uses Scala 2.12. You will need to use a compatible Scala version (2.12.x).
Then how can we make sure the Scala version is 2.12 if we install sbt?
Or is the documentation inaccurate, and should it instead say "to use a specific version of Scala, you need to download that specific Scala version on your own"?
Updates
As per the answer by mario-galic, in ONE-CLICK INSTALL FOR SCALA it is said:
Installing Scala has always been a task more challenging than necessary, with the potential to drive away beginners. Should I install Scala itself? sbt? Some other build tools? What about a better REPL like Ammonite? Oh and before all that I need to install Java?
To solve this problem, the Scala Center contracted Alexandre Archambault in January 2020 to add a one-click install of Scala through coursier. For example, on Linux, all we now need is:
$ curl -Lo cs https://git.io/coursier-cli-linux && chmod +x cs && ./cs setup
The Scala version is specified in the build.sbt file so SBT will download the appropriate version of Scala as necessary.
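For example, a Spark project could pin Scala 2.12 in build.sbt like this (a minimal sketch; the exact patch versions are illustrative):

// build.sbt
scalaVersion := "2.12.15"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.1.2"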
I personally use SDKMAN! to install Java and then SBT.
The key concept to understand is the difference between system-wide installation and project-specific version. System-wide installation ends up somewhere on the PATH like
/usr/local/bin/scala
and can be installed in various ways, personally I recommend coursier one-click install for Scala
curl -Lo cs https://git.io/coursier-cli-linux && chmod +x cs && ./cs setup
The project-specific version is specified by the scalaVersion sbt setting, which downloads Scala to the coursier cache location. To see the Scala version and location used by a particular project, try show scalaInstance, a task that inspect scalaInstance describes as follows:
inspect scalaInstance
[info] Task: sbt.internal.inc.ScalaInstance
[info] Description:
[info] Defines the Scala instance to use for compilation, running, and testing.
Scala should be binary compatible within a minor version, so Spark 3 (or any other software) built against some Scala 2.12.x version should work with any other Scala 2.12.x version (where versions follow major.minor.patch). Note that binary compatibility is not guaranteed for internal compiler APIs, so when publishing compiler plugins, for example, the best practice is to publish against the full, specific Scala version (see the build.sbt sketch after the links below). For example, notice how the kind-projector compiler plugin is published against the full Scala version 2.13.6
https://repo1.maven.org/maven2/org/typelevel/kind-projector_2.13.6/
whilst scala-cats application-level library is published against any Scala 2.13.x version
https://repo1.maven.org/maven2/org/typelevel/cats-core_2.13/
Similarly spark is published against any Scala 2.12.x version
https://repo1.maven.org/maven2/org/apache/spark/spark-core_2.12/
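In build.sbt this difference looks roughly like this (a sketch with illustrative version numbers): application libraries are added with %%, which appends the binary version suffix, while compiler plugins are resolved against the full Scala version:

// application library: resolved against the binary version (e.g. _2.13)
libraryDependencies += "org.typelevel" %% "cats-core" % "2.6.1"
// compiler plugin: resolved against the full Scala version (e.g. 2.13.6)
addCompilerPlugin("org.typelevel" % "kind-projector" % "0.13.0" cross CrossVersion.full)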
Regarding system-wide installation, one trick I use for quickly switching versions is to put scala-runners on the PATH; then different versions can be launched via the --scala-version argument
scala --scala-version 2.12.14
Using coursier or scala-runners you can even switch the JDK quickly via -C--jvm, for example
scala --scala-version 2.12.14 -C--jvm=11
For a project, there should be no need to manually download a specific version of Scala. sbt, either directly or indirectly via an IDE, will download all the dependencies behind the scenes for you, so the only thing to specify is the sbt setting scalaVersion.
Using Python as an analogy to Scala, and Pipenv as an analogy to sbt, python_version in a Pipfile is similar to scalaVersion in build.sbt. After executing pipenv shell and pipenv install you end up with a project-specific shell environment with a project-specific Python version and dependencies. sbt similarly downloads the project-specific Scala version and dependencies based on build.sbt, although it has no need for lock files or for modifying your shell environment.

How to setup different Scala versions on the same machine?

I want to follow the book on Scala[1], but it uses Scala 3 and I have Scala 2 installed. I want to use both versions, something along the lines of python2 and python3.
I tried installing Scala 3 on my local machine using the official source, but I could only get it working at the project level. The sbt prompt does not work like a REPL would, and I can only open a REPL using Scala 2 (I checked the version every time).
How do I open the Scala 3 REPL, given that I cannot uninstall Scala 2?
The sbt prompt does not work like a REPL
If you execute sbt console from within a project directory, it will drop you into a REPL corresponding to the project's scalaVersion. For example, executing sbt console within a project created with sbt new lampepfl/dotty.g8 will start a Scala 3 REPL.
but I could only grasp the project-level working directory
For a system-wide installation, first install coursier and then execute cs install scala3-repl. This installs the Scala 3 REPL alongside the Scala 2 one. Now the Scala 3 REPL can be started with the scala3-repl command, whilst the Scala 2 REPL is still started simply with the scala command.
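For example (a sketch of the commands involved):

cs install scala3-repl
scala3-repl   # starts the Scala 3 REPL
scala         # still starts the Scala 2 REPL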

Run Spark in standalone mode with Scala 2.11?

I follow the instructions to build Spark with Scala 2.11:
mvn -Dscala-2.11 -DskipTests clean package
Then I launch per instructions:
./sbin/start-master.sh
It fails with two lines in the log file:
Failed to find Spark assembly in /etc/spark-1.2.1/assembly/target/scala-2.10
You need to build Spark before running this program.
Obviously, it's looking for a scala-2.10 build, but I did a scala-2.11 build. I tried the obvious -Dscala-2.11 flag, but that didn't change anything. The docs don't mention anything about how to run in standalone mode with scala 2.11.
Thanks in advance!
Before building, you must run the script:
dev/change-version-to-2.11.sh
This should replace all references to 2.10 with 2.11.
Note that this will not necessarily work as intended with non-GNU sed (e.g. OS X)
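Putting it together, the sequence is (the same commands as above, in order):

./dev/change-version-to-2.11.sh
mvn -Dscala-2.11 -DskipTests clean package
./sbin/start-master.sh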

How to use third party libraries with Scala REPL?

I've downloaded Algebird and I want to try out few things in the Scala interpreter using this library. How do I achieve this?
Of course, you can use scala -cp whatever and manually manage your dependencies. But that gets quite tedious, especially if you have multiple dependencies.
A more flexible approach is to use sbt to manage your dependencies. Search for the library you want to use on search.maven.org. Algebird for example is available by simply searching for algebird. Then create a build.sbt referring to that library, enter the directory and enter sbt console. It will download all your dependencies and start a scala console session with all dependencies automatically on the classpath.
Changing things like the scala version or the library version is just a simple change in the build.sbt. To play around you don't need any scala code in your directory. An empty directory with just the build.sbt will do just fine.
Here is a build.sbt for using algebird:
name := "Scala Playground"
version := "1.0"
scalaVersion := "2.10.2"
libraryDependencies += "com.twitter" % "algebird-core" % "0.2.0"
Edit: often when you want to play around with a library, the first thing you have to do is to import the namespace(s) of the library. This can also be automated in the build.sbt by adding the following line:
initialCommands in console += "import com.twitter.algebird._"
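With that in place, a console session might look like this (a sketch; com.twitter.algebird._ is already imported by initialCommands, and the expression is the same one used in the Ammonite answer below):

$ sbt console
scala> import com.twitter.algebird.Operators._
scala> Map(1 -> Max(2)) + Map(1 -> Max(3)) + Map(2 -> Max(4))
// combines the maps per key, keeping the maximum: Map(1 -> Max(3), 2 -> Max(4))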
Running sbt console will not import libraries declared with a test scope. To use those libraries in the REPL, start the console with
sbt test:consoleQuick
You should be aware, however, that starting the console this way skips compiling your test sources.
Source: http://www.scala-sbt.org/0.13/docs/Howto-Scala.html
You can use scala's -cp switch to put jars on the classpath. There are other switches available too, for example -deprecation and -unchecked for turning on various warnings. Many more can be found with scala -X... and scala -Y..., and you can find out more about all of these switches with scala -help.
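For example, to start the REPL with a downloaded jar on the classpath (the jar file name is hypothetical):

scala -deprecation -unchecked -cp algebird-core_2.10-0.2.0.jar
scala> import com.twitter.algebird._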
This is an answer using Ammonite (as opposed to the Scala REPL) - but it is such a great tool that it is worth mentioning.
You can install it with a one liner such as:
sudo sh -c '(echo "#!/usr/bin/env sh" && curl -L https://github.com/lihaoyi/Ammonite/releases/download/2.1.2/2.13-2.1.2) > /usr/local/bin/amm && chmod +x /usr/local/bin/amm' && amm
or using brew on macOS:
brew install ammonite-repl
For Scala 2.10, you need to use an older version, 1.0.3:
sudo sh -c '(echo "#!/usr/bin/env sh" && curl -L https://github.com/lihaoyi/Ammonite/releases/download/1.0.3/2.10-1.0.3) > /usr/local/bin/amm && chmod +x /usr/local/bin/amm' && amm
Run Ammonite in your terminal:
amm
// Displays
Loading...
Welcome to the Ammonite Repl 2.1.0 (Scala 2.12.11 Java 1.8.0_242)
Use an $ivy import to bring in your third-party library:
import $ivy.`com.twitter::algebird-core:0.2.0`
Then you can use your library within the Ammonite-REPL:
import com.twitter.algebird._
import com.twitter.algebird.Operators._
Map(1 -> Max(2)) + Map(1 -> Max(3)) + Map(2 -> Max(4))
...

Scala : trying to get log4j working

Scala newb here (it's my 2nd day of using it). I want to get log4j logging working in my Scala script. The script and the results are below, any ideas as to what's going wrong?
[sean#ibmp2 pybackup]$ cat backup.scala
import org.apache.log4j._
val log = LogFactory.getLog()
log.info("started backup")
[sean#ibmp2 pybackup]$ scala -cp log4j-1.2.16.jar:. backup.scala
/home/sean/projects/personal/pybackup/backup.scala:1: error: value apache is not a member of package org
import org.apache.log4j._
^
one error found
I can reproduce it under Windows: the -classpath delimiter must be ';' there (not ':'). Are you using Cygwin or some sort of Unix emulator?
But a Scala script works anywhere without the current directory on the classpath. Try:
$ scala -cp log4j-1.2.16.jar backup.scala
JFI: LogFactory is a class from the commons-logging library, not log4j; with log4j you would use Logger.getLogger.
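A minimal sketch of the script using the log4j 1.2 API directly (BasicConfigurator just sets up a default console appender):

// backup.scala
import org.apache.log4j.{BasicConfigurator, Logger}

BasicConfigurator.configure()
val log = Logger.getLogger("backup")
log.info("started backup")

Run it with scala -cp log4j-1.2.16.jar backup.scala as above.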
UPDATE
Another possible cause: a broken jar on the classpath, perhaps corrupted during download or something else. The Scala interpreter only reports that the member of the package is unavailable.
$ echo "qwerty" > example.jar
$ scala -cp example.jar backup.scala
backup.scala:1: error: value apache is not a member of package org
...
You then need to inspect the content of the jar file:
$ jar -tf log4j-1.2.16.jar
...
org/apache/log4j/Appender.class
...
Did you remember to put log4j.jar in your classpath?
I had a similar issue when I started doing Scala development using Eclipse; doing a clean build solved the problem.
I guess the Scala tools are not mature yet.
Instead of using log4j directly, you might try using Configgy. It's the Scala Way™ to work with log4j, as well as configuration files. It also plays nicely with SBT and Maven.
I asked and answered this question myself, have a look:
Put it under src/main/resources/logback.xml. It will be copied to the right location when sbt assembles the artifact.