I follow the instructions to build Spark with Scala 2.11:
mvn -Dscala-2.11 -DskipTests clean package
Then I launch per instructions:
./sbin/start-master.sh
It fails with two lines in the log file:
Failed to find Spark assembly in /etc/spark-1.2.1/assembly/target/scala-2.10
You need to build Spark before running this program.
Obviously, it's looking for a scala-2.10 build, but I did a scala-2.11 build. I tried the obvious -Dscala-2.11 flag, but that didn't change anything. The docs don't mention anything about how to run in standalone mode with scala 2.11.
Thanks in advance!
Before building you must run the script under:
dev/change-version-to-2.11.sh
It should replace references to 2.10 with 2.11 throughout the build files.
Note that this will not necessarily work as intended with non-GNU sed (e.g. OS X)
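Putting it together, a sequence like the following should produce a Scala 2.11 build (a sketch using the same -Dscala-2.11 flag from the question, run from the Spark source root):
dev/change-version-to-2.11.sh
mvn -Dscala-2.11 -DskipTests clean package
./sbin/start-master.sh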
Related
If you follow the steps on the official Scala 3 sites, like Dotty or Scala Lang, they recommend using Coursier to install Scala 3. The problem is that neither of these explains how to run a compiled Scala 3 application after following the steps.
Scala 2:
> cs install scala
> scalac HelloScala2.scala
> scala HelloScala2
Hello, Scala 2!
Scala 3:
> cs install scala3-compiler
> scala3-compiler HelloScala3.scala
Now how do you run the compiled application with Scala 3?
Currently there does not seem to be a way to launch a runner for Scala 3 using Coursier; see this issue. As a workaround, you can install the binaries from the GitHub releases page. Scroll all the way down past the contributor list to find the .zip file, download it, and unpack it to some local folder. Then put the unpacked bin directory on your PATH. After a restart you will get the scala command (and scalac etc.) in the terminal.
Another workaround is using the java runner directly with a classpath from coursier by this command:
java -cp $(cs fetch -p org.scala-lang:scala3-library_3:3.0.0):. myMain
Replace myMain with the name of your @main def function. If it is in a package myPack you need to say myPack.myMain (as usual).
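For reference, a minimal sketch of such an entry point, using the placeholder names from above (the file name and myMain are illustrative):
// HelloScala3.scala -- a minimal Scala 3 entry point (illustrative)
@main def myMain(): Unit =
  println("Hello, Scala 3!")
Compile it with scala3-compiler first so the resulting class files land in the current directory, which is what the trailing :. on the classpath picks up.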
Finally, it is possible to run a Scala 3 application the same way as in Scala 2 by installing scala3 via Coursier:
cs install scala3
Then you can compile it with scala3-compiler and run it with scala3:
scala3-compiler Main.scala
scala3 Main.scala
This work-around seems to work for me:
cs launch scala3-repl:3+ -M dotty.tools.MainGenericRunner -- YourScala3File.scala
This way, you don't even have to compile the source code first.
In case your source depends on third-party libraries, you can specify the dependencies like this:
cs launch scala3-repl:3+ -M dotty.tools.MainGenericRunner -- -classpath \
$(cs fetch --classpath io.circe:circe-generic_3:0.14.1):. \
YourScala3File.scala
This would be an example where you use the circe library that's compiled with Scala 3. You should be able to specify multiple third-party libraries with the fetch sub-command.
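As an illustrative sketch of what such a YourScala3File.scala might contain (the Person case class is made up here; circe 0.14 supports derives-based codec derivation on Scala 3):
import io.circe.Codec
import io.circe.syntax.*
// illustrative model; derives generates the JSON codec at compile time
case class Person(name: String, age: Int) derives Codec.AsObject
@main def run(): Unit =
  println(Person("Ada", 36).asJson.noSpaces) // prints {"name":"Ada","age":36}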
So I am editing MovieLensALS.scala and I want to just recompile the examples jar with my modified MovieLensALS.scala.
I used build/mvn -pl :spark-examples_2.10 compile followed by build/mvn -pl :spark-examples_2.10 package, both of which finish normally. I have SPARK_PREPEND_CLASSES=1 set.
But when I re-run MovieLensALS using bin/spark-submit --class org.apache.spark.examples.mllib.MovieLensALS examples/target/scala-2.10/spark-examples-1.4.0-hadoop2.4.0.jar --rank 5 --numIterations 20 --lambda 1.0 --kryo data/mllib/sample_movielens_data.txt, I get a java.lang.StackOverflowError, even though all I added to MovieLensALS.scala is a println saying that this is the modified file, with no other modifications whatsoever.
My Scala version is 2.11.8 and my Spark version is 1.4.0, and I am following the discussion on this thread to do what I am doing.
Help will be appreciated.
So I ended up figuring it out myself. I compiled using mvn compile -rf :spark-examples_2.10 followed by mvn package -rf :spark-examples_2.10 to generate the .jar file. Note that the jar file produced here is spark-examples-1.4.0-hadoop2.2.0.jar.
The StackOverflowError, on the other hand, was caused by a long RDD lineage. To fix it I could either use checkpoints or reduce numIterations; I did the latter. I followed this for it.
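For the checkpoint alternative, a minimal sketch (assuming an existing SparkContext named sc; the directory path is arbitrary):
// once a checkpoint directory is set, MLlib's ALS periodically
// checkpoints its intermediate factor RDDs, which truncates the long
// lineage behind the StackOverflowError
sc.setCheckpointDir("/tmp/spark-checkpoints") // any local or HDFS path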
We are creating a customized version of Spark, since we are changing some lines of code in ALS.scala. We build the customized Spark version using the make-distribution.sh script (which invokes Maven):
./make-distribution.sh --name custom-spark --tgz -Psparkr -Phadoop-2.6 -Phive -Phive-thriftserver -Pyarn
However, upon using the customized version of Spark, we run into this error:
Do you have any idea what causes the error and how we might solve the issue?
On the local machine I am actually using a jar file built with sbt (sbt compile, then sbt clean package) and placed here: /Users/user/local/kernel/kernel-0.1.5-SNAPSHOT/lib.
However, in the Hadoop environment the installation is different, so I use Maven to build Spark, and that is where the error comes in. I suspect the error may be related to building Spark with Maven, as there are reports like this:
https://issues.apache.org/jira/browse/SPARK-2075
or possibly to how the Spark assembly files are built.
Please note that I am a better data miner than programmer.
I am trying to run the examples from the book "Advanced Analytics with Spark" by Sandy Ryza (the code examples can be downloaded from https://github.com/sryza/aas), and I run into the following problem.
When I open this project in IntelliJ IDEA and try to run it, I get the error "Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/rdd/RDD".
Does anyone know how to solve this issue?
Does this mean I am using the wrong version of Spark?
First, when I tried to run this code I got the error "Exception in thread "main" java.lang.NoClassDefFoundError: scala/Product", but I solved it by setting the scala-library dependency to compile scope in Maven.
I use Maven 3.3.9, Java 1.7.0_79, Scala 2.11.7, and Spark 1.6.1. I tried both IntelliJ IDEA 14 and 15 and different versions of Java (1.7), Scala (2.10), and Spark, but with no success.
I am also using Windows 7.
My SPARK_HOME and Path variables are set, and I can execute spark-shell from the command line.
The examples in this book assume a --master argument to spark-shell, but you will need to specify arguments appropriate for your environment. If you don't have Hadoop installed, you need to start the spark-shell locally. To execute the samples you can simply pass paths as local file references (file:///) rather than HDFS references (hdfs://).
The author suggests a hybrid development approach:
Keep the frontier of development in the REPL, and, as pieces of code
harden, move them over into a compiled library.
Hence the sample code is meant to be used as a compiled library rather than a standalone application. You can make the compiled JAR available to spark-shell by passing it to the --jars option, while Maven is used for compiling and managing dependencies.
In the book the author describes how the simplesparkproject can be executed:
use maven to compile and package the project
cd simplesparkproject/
mvn package
start the spark-shell with the jar dependencies
spark-shell --master local[2] --driver-memory 2g --jars ../simplesparkproject-0.0.1.jar ../README.md
Then you can access your object within the spark-shell as follows:
val myApp = com.cloudera.datascience.MyApp
However, if you want to execute the sample code as a standalone application and run it within IDEA, you need to modify the pom.xml.
Some of the dependencies are required for compilation but are already available in a Spark runtime environment. Therefore these dependencies are marked with the provided scope in the pom.xml:
<!--<scope>provided</scope>-->
If you comment out the provided scope as shown, you will be able to run the samples within IDEA, but you can no longer pass this jar as a dependency to the spark-shell (see the sketch below).
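As a rough sketch, the relevant dependency entry in the pom.xml looks like this (the artifact and version here are illustrative, not copied from the project):
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.6.1</version>
  <!--<scope>provided</scope>--> <!-- uncomment again before using the jar with spark-shell -->
</dependency>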
Note: I am using Maven 3.0.5 and Java 7+. I had problems with the Maven 3.3.x versions and the plugin versions.
I've been trying to compile Apache Spark with scala-2.11.1 (the latest version at the time). However, each time it ends up compiling everything against scala-2.10.*, and I don't understand why.
The official documentation suggests that we use Maven for compilation after switching to 2.11 with the script in the dev/ folder.
What if I wanted to use sbt instead?
You need to enable the scala-2.11 profile:
> sbt -Dscala-2.11=true
sbt> compile
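As with the Maven build at the top of this page, you may also need to run the version-change script first so the build files reference 2.11 (a sketch of the full sequence, under that assumption):
> dev/change-version-to-2.11.sh
> sbt -Dscala-2.11=true
sbt> compile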