Include jar file in Scala interpreter

Is it possible to include a jar file when running the Scala interpreter?
My code works when I compile it with scalac:
scalac script.scala -classpath *.jar
But I would like to be able to include a jar file when running the interpreter.

In Scala 2.8 you can use
scala> :jar JarName.jar
to add a jar to the classpath.
In Scala 2.8.1 the command is :cp rather than :jar.
And in Scala 2.11.7 it is :require rather than :cp.

According to the scala executable's help, all options of scalac are allowed,
so you can run scala -classpath some.jar. I've just tried it and it works.

Include multiple jars in the Scala REPL (2.10.0-RC2):
scala -classpath my_1st.jar:my_2nd.jar:my_3rd.jar

In my case I am using Scala code runner version 2.9.2 and I had to add quotation marks.
I am using these jar files:
jdom-b10.jar, rome-0.9.jar
and everything works fine with this:
scala -classpath "*.jar" feedparser.scala

In Scala version 2.11.6, use :require from the Scala REPL; this can best be discovered by running :help from the REPL.
For example:
$ scala
Welcome to Scala version 2.11.6 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_45).
Type in expressions to have them evaluated.
Type :help for more information.
scala> :require lift-json_2.11-3.0-M5-1.jar
Added '<path to lift json library>/lift-json/lift-json_2.11-3.0-M5-1.jar' to classpath.

Scala version 2.11.5:
Here is an example of adding all jars in your ivy cache:
scala -cp /Users/dbysani/.ivy2/cache/org.apache.spark/spark-streaming_2.10/jars/*
scala> import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.StreamingContext
You can also create a local folder containing all the jars you need and add it in a similar way.
Hope this helps.
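For instance, a minimal sketch of that local-folder approach (the lib directory name and the source of the jars are just assumptions):
# collect the jars you need into one place
mkdir -p lib
cp ~/.ivy2/cache/org.apache.spark/spark-streaming_2.10/jars/*.jar lib/
# start the REPL with every jar in that folder on the classpath
scala -cp "lib/*"
The quotes stop the shell from expanding the wildcard, leaving the classpath expansion to the JVM.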

"lib/*.jar" generates a list with blank between items not ":" or ";" as required.
Since Java 6 "lib/*" should work, but sometimes doesn't (classpath is set somewhere else)
I use a script like:
Windows:
@echo off
rem all *.jar files in the lib subdirectory
set clp=.
for %%c in (lib\*.jar) do call :Setclasspath %%c
echo The classpath is %clp%
scala -classpath %clp% script.scala
exit /B %ERRORLEVEL%
:Setclasspath
set clp=%clp%;%~1
exit /B 0
Linux:
#!/bin/bash
# all *.jar files in the lib subdirectory
clp="."
for file in lib/*.jar
do
  clp="$clp:$file"
done
echo "$clp"
scala -classpath "$clp" script.scala

Related

How do I make packages available to the Scala REPL?

I'm trying to get familiar with Scala. I am using macOS.
I've installed Scala using brew install scala, which is hassle-free, and once it completes I can launch the Scala REPL simply by issuing scala, which puts me at the scala> prompt.
I now want to import some packages, so I try:
import org.apache.spark.sql.Column
and unsurprisingly it fails with
error: object apache is not a member of package org
This makes sense: how would it know where to get that package from? Thing is, I don't know what I need to do to make that package available. Is there anything I can do from the command line that would allow me to import org.apache.spark.sql.Column?
I have googled around a little but not found anything that explains in a jargon-free way. Complete Scala noob here so jargon-free responses would be appreciated.
Here are two ways to start a REPL with dependencies that I'm aware of:
Use SBT to manage dependencies, use console to start a REPL with those dependencies
Use Ammonite REPL
You could create a separate directory with a build.sbt where you set
scalaVersion := "2.11.12"
and then copy the
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.0"
snippet from Maven Central. Then you can run the REPL with sbt console. Note that this will create project and target subdirectories, so it "leaves traces"; you can't use it like the standalone Scala REPL. You could also omit the build.sbt and add the library dependencies by typing them into the SBT shell itself.
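For example, a minimal build.sbt along those lines, reusing the exact versions quoted above, might be:
// build.sbt for a throwaway REPL project (a sketch)
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.0"
Running sbt console in that directory then starts a REPL in which import org.apache.spark.sql.Column resolves.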
Alternatively you can just use Ammonite REPL that has been created exactly for that purpose.
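With Ammonite, a dependency can be pulled in directly from the running REPL via its import $ivy syntax, e.g. (a sketch reusing the same coordinates as the SBT example above):
@ import $ivy.`org.apache.spark::spark-sql:2.3.0`
@ import org.apache.spark.sql.Column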
You can use the classpath to make the lib available, i.e. download the jar locally and use the command as follows (here I'm using the Apache Commons IO library to move files from the Scala prompt):
C0:Desktop pvangala$ scala -cp /Users/pvangala/Downloads/commons-io-2.6/commons-io-2.6.jar
Welcome to Scala 2.12.5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_161).
Type in expressions for evaluation. Or try :help.
scala> import java.io.File
import java.io.File
scala> val src = new File("/Users/pvangala/Downloads/commons-io-2.6-bin.tar")
src: java.io.File = /Users/pvangala/Downloads/commons-io-2.6-bin.tar
scala> val dst = new File("/Users/pvangala/Desktop")
dst: java.io.File = /Users/pvangala/Desktop
scala> org.apache.commons.io.FileUtils.moveFileToDirectory(src, dst, true)
If you want to use Spark stuff, I'd recommend you use the spark-shell that comes with the Spark installation. I don't know macOS, so I can't help you much with installing Spark there.
For normal Scala I recommend Ammonite (http://ammonite.io/#Ammonite-REPL), which has built-in syntax for pulling in packages/dependencies.
If you want to use Spark, you should use the spark-shell instead of the Scala REPL. It has almost the same behaviour but includes all the Spark dependencies by default.
You should download the Spark binaries from here.
Then, if you are using Linux, you should create the variable SPARK_HOME pointing to the downloaded folder and include its bin folder in PATH.
Then you can start it in any console with the command spark-shell.
On Windows I'm not sure, but I think you should have a spark-shell.cmd file or something similar which you can use to start the spark-shell.
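On Linux, the setup described above might look roughly like this (the unpack path is just an assumption):
export SPARK_HOME=~/spark-2.3.0-bin-hadoop2.7   # wherever you unpacked the binaries
export PATH="$SPARK_HOME/bin:$PATH"
spark-shell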
I did the following on Windows:
for /f "tokens=*" %%a in ('java -jar coursier fetch -p "com.lihaoyi::requests:0.2.0" "com.lihaoyi::upickle:0.7.5"') do set SCP=%%a
scala -nc -classpath %SCP% %1 %2 %3
Instead of the two libraries listed here, you can use an unlimited number of other libraries, as long as they are available on Maven Central. The %1 can be a Scala script (".sc" extension), but you can also leave it empty, and then the REPL will start with the libraries on the classpath.
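A rough Linux equivalent of the same approach, assuming the coursier launcher is on the PATH as coursier, might be:
scala -nc -classpath "$(coursier fetch -p com.lihaoyi::requests:0.2.0 com.lihaoyi::upickle:0.7.5)"
where fetch -p prints the resolved jars as a ready-made classpath string.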

Add multiple classpath entries to scala REPL classpath

:cp seems to only accept a single entry
scala> :cp /usr/lib/hadoop/*:/usr/lib/hadoop/lib/*:/usr/lib/hbase/*:/usr/lib/hbase/lib/*:/home/sboesch/spark-master/lib_managed/jars/*:/home/sboesch/spark-master/lib_managed/bundles/*:
The path '/usr/lib/hadoop/*:/usr/lib/hadoop/lib/*:/usr/lib/hbase/*:/usr/lib/hbase/lib/*:/home/sboesch/spark-master/lib_managed/jars/*:/home/sboesch/spark-master/lib_managed/bundles/*:' doesn't seem to exist.
Any thoughts on how to do this when already in the REPL? Yes, I know how to set it up from outside the REPL:
CLASSPATH=/usr/lib/hadoop/*:/usr/lib/hadoop/lib/*:/usr/lib/hbase/*:/usr/lib/hbase/lib/*:/home/sboesch/spark-master/lib_managed/jars/*:/home/sboesch/spark-master/lib_managed/bundles/*: scala
EDIT It seems the intent was not clear. I am working on code in the REPL and then have a new snippet of code that requires a few classpath entries. It is a ONE-OFF affair, so I do not want to add to build.sbt or to the scala/lib dir, etc. I did not receive any answer really satisfying this use case, but awarded the best efforts anyway.
scala -cp "path1:path2" now seems to work.
$ scala -version
Picked up _JAVA_OPTIONS: -Xms512m -Xmx4096m -XX:MaxPermSize=1024m -XX:ReservedCodeCacheSize=128m
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=1024m; support was removed in 8.0
Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL
The help text for :cp says:
:cp <path> add a jar or directory to the classpath
So I'm guessing there's no exact way for you to get that. I'd use this:
:load <path> interpret lines in a file
I confirmed that it works for REPL commands as well as Scala code.
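For instance, a sketch of that trick, with hypothetical file and jar names: create a plain text file addjars.repl containing one :cp command per line,
:cp /usr/lib/hadoop/hadoop-common.jar
:cp /usr/lib/hbase/hbase-client.jar
and then pull them all in at once from the REPL:
scala> :load addjars.repl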
Addendum:
If you use SBT, then all your project's dependencies are on the classpath of the REPL launched by SBT's console task.
A quick and dirty approach: add a link from $SCALA_HOME/lib/ to a folder with additional jar files. Then from the REPL you can import the packages of interest.
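For example (a sketch; the extra-jars folder and jar name are assumptions):
ln -s ~/extra-jars/my-library.jar $SCALA_HOME/lib/my-library.jar
The scala launcher puts the jars under $SCALA_HOME/lib/ on the classpath at startup, so they become importable in the REPL.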

How does BTrace's -classpath param support many jar files?

Sometimes we need to depend on third-party jar files when using BTrace.
Maybe I need to import a.jar and b.jar to support a BTrace script. How should I spell the -classpath param?
I have fixed this problem.
The -classpath param can take multiple jar file paths.
On Windows it looks like -classpath ./a.jar;./b.jar
and on Linux like -classpath ./a.jar:./b.jar
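So a full BTrace invocation on Linux might look roughly like this (the PID and script name are hypothetical):
btrace -classpath ./a.jar:./b.jar 1234 MyTraceScript.java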

How to fire up Scala interpreter with ScalaCheck in the classpath in Ubuntu 11.10?

Scala is installed and working fine.
scalacheck.jar is placed in /bin.
I used the following command
$ scala -cp scalacheck.jar
After that, when I tried the command below,
scala> import org.scalacheck.Prop.forAll
I got the following error.
<console>:7: error: object scalacheck is not a member of package org
import org.scalacheck.Properties
^
I might have made some mistake in using ScalaCheck; please correct me and give the proper commands so that I am able to work with ScalaCheck in interpreter mode on Ubuntu.
Putting an executable on the path isn't the same as a jar being on the classpath, so your jar being in /bin didn't change anything.
Just use:
scala -cp path_to_your.jar
and you should be fine.
If, for example, your scalacheck.jar is in /bin, then use:
scala -cp /bin/scalacheck.jar
Edit:
Putting jars in /bin probably isn't the best idea.
You can use it like this:
kjozsa#walrus:~$ scala -version
Scala code runner version 2.9.2 -- Copyright 2002-2011, LAMP/EPFL
kjozsa#walrus:~$ locate scalacheck.jar
/usr/share/scala/lib/scalacheck.jar
kjozsa#walrus:~$ scala -cp /usr/share/scala/lib/scalacheck.jar
Welcome to Scala version 2.9.2 (OpenJDK 64-Bit Server VM, Java 1.7.0_03-icedtea).
Type in expressions to have them evaluated.
Type :help for more information.
scala> import org.scalacheck.Prop.forAll
import org.scalacheck.Prop.forAll
scala>

Scala : trying to get log4j working

Scala newb here (it's my 2nd day of using it). I want to get log4j logging working in my Scala script. The script and the results are below; any ideas as to what's going wrong?
[sean#ibmp2 pybackup]$ cat backup.scala
import org.apache.log4j._
val log = LogFactory.getLog()
log.info("started backup")
[sean#ibmp2 pybackup]$ scala -cp log4j-1.2.16.jar:. backup.scala
/home/sean/projects/personal/pybackup/backup.scala:1: error: value apache is not a member of package org
import org.apache.log4j._
^
one error found
I reproduced it under Windows: the delimiter for '-classpath' must be ';' there (not ':'). Are you using cygwin or some sort of unix emulator?
But a Scala script works anywhere without the current dir in the classpath. Try to use:
$ scala -cp log4j-1.2.16.jar backup.scala
JFI: LogFactory is a class from the Apache Commons Logging library (not log4j).
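A sketch of the script rewritten against log4j's own API (the logger name is arbitrary):
import org.apache.log4j._
// install a default console appender so the output is actually visible
BasicConfigurator.configure()
val log = Logger.getLogger("backup")
log.info("started backup")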
UPDATE
Another possible case: a broken jar on the classpath, perhaps corrupted during download or something else. The Scala interpreter only reports the unavailable member of the package.
$ echo "qwerty" > example.jar
$ scala -cp example.jar backup.scala
backup.scala:1: error: value apache is not a member of package org
...
You need to inspect the content of the jar file:
$ jar -tf log4j-1.2.16.jar
...
org/apache/log4j/Appender.class
...
Did you remember to put log4j.jar in your classpath?
Had a similar issue when I started doing Scala development using Eclipse; doing a clean build solved the problem.
Guess the Scala tools are not mature yet.
Instead of using log4j directly, you might try using Configgy. It's the Scala Way™ to work with log4j, as well as configuration files. It also plays nicely with SBT and Maven.
I asked and answered this question myself; have a look:
Put it under src/main/resources/logback.xml. It will be copied to the right location when SBT is doing the artifact assembly.