Master must start with yarn, spark - scala

I am getting this error when I want to run the SparkPi example.
beyhan#beyhan:~/spark-1.2.0-bin-hadoop2.4$ /home/beyhan/spark-1.2.0-bin-hadoop2.4/bin/spark-submit --master ego-client --class org.apache.spark.examples.SparkPi /home/beyhan/spark-1.2.0-bin-hadoop2.4/lib/spark-examples-1.jar
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Error: Master must start with yarn, spark, mesos, or local
Run with --help for usage help or --verbose for debug output
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Also, I already started my master in another terminal:
>./sbin/start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /home/beyhan/spark-1.2.0-bin-hadoop2.4/sbin/../logs/spark-beyhan-org.apache.spark.deploy.master.Master-1-beyhan.out
Any suggestions?
Thanks.

Download and extract Spark:
$ cd ~/Downloads
$ wget -c http://archive.apache.org/dist/spark/spark-1.2.0/spark-1.2.0-bin-hadoop2.4.tgz
$ cd /tmp
$ tar zxf ~/Downloads/spark-1.2.0-bin-hadoop2.4.tgz
$ cd spark-1.2.0-bin-hadoop2.4/
Start master:
$ sbin/start-master.sh
Find the master's URL in the log file whose path the above command printed. Let's assume the master URL is spark://ego-server:7077.
You can also find your master URL by visiting http://localhost:8080/.
Start one slave and connect it to the master (in Spark 1.2, start-slave.sh takes a worker number followed by the master URL):
$ sbin/start-slave.sh 1 spark://ego-server:7077
Another way to ensure that the master is up and running is to start a Spark shell bound to that master:
$ bin/spark-shell --master spark://ego-server:7077
If you get a Spark shell prompt, everything seems fine.
Now execute your job:
$ find . -name "spark-example*jar"
./lib/spark-examples-1.2.0-hadoop2.4.0.jar
$ bin/spark-submit --master "spark://ego-server:7077" --class org.apache.spark.examples.SparkPi ./lib/spark-examples-1.2.0-hadoop2.4.0.jar

The error you're getting
Error: Master must start with yarn, spark, mesos, or local
means that --master ego-client is not recognized by Spark.
Use
--master local
for local execution of Spark, or
--master spark://your-spark-master-ip:7077
to submit to the standalone master you started.
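For instance, a purely local run of the same example (no cluster required; local[*] simply uses all available cores) would be:
$ bin/spark-submit --master local[*] --class org.apache.spark.examples.SparkPi ./lib/spark-examples-1.2.0-hadoop2.4.0.jar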

Related

scala spark to read file from hdfs cluster

I am learning to develop Spark applications using Scala, and I am at my very first steps.
I have my Scala IDE on Windows, configured, and it runs smoothly when reading files from the local drive. However, I have access to a remote HDFS cluster and a Hive database, and I want to develop, try, and test my applications against that Hadoop cluster... but I don't know how :(
If I try
val rdd=sc.textFile("hdfs://masternode:9000/user/hive/warehouse/dwh_db_jrtf.db/discipline")
I will get an error that contains:
Exception in thread "main" java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "MyLap/11.22.33.44"; destination host is: "masternode":9000;
Can anyone guide me, please?
You can use SBT to package your code into a .jar file, scp the jar to your node, and then submit it with spark-submit.
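For example (the sbt output path, application jar name, and user below are placeholders, not taken from the question):
$ sbt package
$ scp target/scala-2.11/my-app_2.11-0.1.jar user@masternode:~/
Then, on that node, run the submit command: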
spark-submit \
--class <main-class> \
--master <master-url> \
--deploy-mode <deploy-mode> \
--conf <key>=<value> \
... # other options
<application-jar> \
[application-arguments]
You can't access your cluster from your Windows machine in that way.

How to pass external configuration file to pyspark (Spark 2.x) program?

When I run the pyspark program in the interactive shell, it is able to fetch the configuration file (config.ini) inside the pyspark script.
But when I try to run the same script using spark-submit with master yarn and cluster deploy mode, it gives me an error that the config file does not exist. I checked the YARN log and can see the same there. Below is the command for running the pyspark job.
spark2-submit --master yarn --deploy-mode cluster test.py /home/sys_user/ask/conf/config.ini
The spark2-submit command provides a --properties-file parameter; you can use that to make this properties file available to the spark-submit command.
e.g. spark2-submit --master yarn --deploy-mode cluster --properties-file $CONF_FILE_NAME pyspark_script.py
Pass the ini file via the spark.files parameter:
.config('spark.files', 'config/local/config.ini') \
Read it in pyspark (SparkFiles is imported from the pyspark package):
from pyspark import SparkFiles

with open(SparkFiles.get('config.ini')) as config_file:
    print(config_file.read())
It works for me.
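Another option for YARN cluster mode is to ship the file with spark-submit's --files flag, which copies it into the working directory of the driver and executor containers, so the script can open it by its bare name (or via SparkFiles.get as above). For example, using the path from the question:
spark2-submit --master yarn --deploy-mode cluster --files /home/sys_user/ask/conf/config.ini test.py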

Pushing my application.config file to my spark job worker nodes

My spark job is failing and it looks like the reason is that my configuration file is not found on the worker node.
My config file is currently in:
/src/main/resources/application.conf
I copied the file to the root folder where I run the spark-submit command and I did this:
spark-submit --class "com.path.to.main.MainClass" --master local[*] --files application.conf /path/to/jar.jar
That didn't seem to work either as I got the same error.
What am I doing wrong?

Running application doesn't appear in Spark web UI but runs

I need your help. I created 2 apps (one using the Spray framework, and the other one receiving messages from Kafka and sending them to Cassandra).
Both run all the time and should never stop.
I am running standalone mode on the server and my conf is:
- In spark-env.sh:
SPARK_MASTER_IP=MYIP
SPARK_EXECUTOR_CORES=2
SPARK_MASTER_PORT=7077
SPARK_EXECUTOR_MEMORY=4g
#SPARK_WORKER_PORT=65000
MASTER=spark://${SPARK_MASTER_IP}:${SPARK_MASTER_PORT}
SPARK_LOCAL_IP=MYIP
SPARK_MASTER_WEBUI_PORT=8080
- In spark-defaults.conf:
spark.master spark://MYIPMASTER:7077
spark.eventLog.enabled true
spark.eventLog.dir /opt/spark-1.6.1-bin-hadoop2.6/spark-events
spark.history.fs.logDirectory /opt/spark-1.6.1-bin-hadoop2.6/logs
spark.io.compression.codec lzf
spark.cassandra.connection.host MYIPMASTER
spark.cassandra.auth.username LOGIN
spark.cassandra.auth.password PASSWORD
I can access both pages:
MYIP:8080/ and MYIP:4040/
But on http://MYIP:8080/ I see only my workers; I can't see my running application.
When I submit, I use this:
/opt/spark-1.6.1-bin-hadoop2.6/bin/spark-submit --class MYCLASS --verbose --conf spark.eventLog.enable=true --conf spark.master.ui.port=8080 --master local[2] /opt/spark-1.6.1-bin-hadoop2.6/jars/MYJAR.jar
Why?
Could you help me?
Thanks a lot :)
In your spark-submit command you are using --master local[2], which submits the application in local mode. If you want to run it on the standalone cluster you are running, then you should pass the Spark master URL in the master option, i.e. --master spark://MYIPMASTER:7077
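For example, changing only the master (and note that the event-log property Spark reads is spark.eventLog.enabled, not spark.eventLog.enable):
/opt/spark-1.6.1-bin-hadoop2.6/bin/spark-submit --class MYCLASS --verbose --conf spark.eventLog.enabled=true --conf spark.master.ui.port=8080 --master spark://MYIPMASTER:7077 /opt/spark-1.6.1-bin-hadoop2.6/jars/MYJAR.jar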
In terms of the master, spark-submit will respect the setting in the following order:
The master URL in your application code, which is the
SparkSession.builder().master("...")
The --master parameter for the spark-submit command
The default configuration in your spark-defaults.conf
Mode: Standalone cluster
1> bin/spark-submit --class com.deepak.spark.App ../spark-0.0.2-SNAPSHOT.jar --master spark://172.29.44.63:7077 was not working, because the master was specified after the jar (everything after the application jar is treated as an application argument)
2> bin/spark-submit --class com.deepak.spark.App --master spark://172.29.44.63:7077 ../spark-0.0.2-SNAPSHOT.jar, this worked

Exception when running Spark job server in spark standalone mode

I'm trying out the Spark job server - specifically, the docker container option. I was able to run the WordCountExample app in spark local mode. However, I ran into an exception when I tried to point the app to a remote Spark master.
Following are the commands I used to run the WordCountExample app:
1. sudo docker run -d -p 8090:8090 -e SPARK_MASTER=spark://10.501.502.503:7077 velvia/spark-jobserver:0.6.0
2. sbt job-server-tests/package
3. curl --data-binary @job-server-tests/target/scala-2.10/job-server-tests_2.10-0.6.2-SNAPSHOT.jar localhost:8090/jars/test
4. curl -d "input.string = a b c a b see" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'
Following is the exception I hit when I ran step 4 above:
{
"status": "ERROR",
"result": {
"message": "Futures timed out after [15 seconds]",
"errorClass": "java.util.concurrent.TimeoutException",
"stack": ["scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)", "scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)", "scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)", "akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169)", "scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)", "akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167)", "akka.dispatch.BatchingExecutor$Batch.blockOn(BatchingExecutor.scala:101)", "scala.concurrent.Await$.result(package.scala:107)", ...
I started the remote Spark cluster (master and workers) using
cd $SPARK_HOME
./sbin/start-all.sh
The remote cluster runs Spark version 1.5.1 (i.e., the prebuilt binary spark-1.5.1-bin-hadoop2.6).
Questions
Any suggestions on how I could debug this?
Are there any logs I could look into to figure out the root cause?
Thanks in advance.
This could be a network issue. The SJS server should be reachable from the Spark cluster.
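For instance, assuming nc (netcat) is available, you could check basic reachability in both directions:
$ nc -zv 10.501.502.503 7077        # from the job server host to the Spark master
$ nc -zv <jobserver-host> 8090      # from a cluster node back to the job server (placeholder hostname)
Keep in mind that the Spark executors also need to connect back to the driver running inside the job server container, so port 8090 alone is not enough; one workaround is to run the container with host networking (docker run --net=host ...).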
I had the same problem with Spark 1.6.1. I changed the jobserver version to the latest (0.6.2.mesos-0.28.1.spark-1.6.1) and it works for me.
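With the Docker setup from the question, that would mean pulling the matching image tag (assuming it is published under the same naming scheme as velvia/spark-jobserver:0.6.0):
sudo docker run -d -p 8090:8090 -e SPARK_MASTER=spark://10.501.502.503:7077 velvia/spark-jobserver:0.6.2.mesos-0.28.1.spark-1.6.1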