How to use TypeSafe config with Apache Spark? - scala

I have a Spark application which I am trying to package as a fat jar and deploy to the local cluster with spark-submit. I am using Typesafe config to create config files for various deployment environments - local.conf, staging.conf, and production.conf - and trying to submit my jar.
The command I am running is the following:
/opt/spark-3.0.1-bin-hadoop2.7/bin/spark-submit \
--master spark://127.0.0.1:7077 \
--files ../files/local.conf \
--driver-java-options '-Dconfig.file=local.conf' \
target/scala-2.12/spark-starter-2.jar
I built the command incrementally by adding options one after another. With --files, the logs suggest that the file is being uploaded to Spark, but when I add --driver-java-options, the submission fails because the file is not found.
Caused by: java.io.FileNotFoundException: local.conf (No such file or directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:219)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
at com.typesafe.config.impl.Parseable$ParseableFile.reader(Parseable.java:629)
at com.typesafe.config.impl.Parseable.reader(Parseable.java:99)
at com.typesafe.config.impl.Parseable.rawParseValue(Parseable.java:233)
at com.typesafe.config.impl.Parseable.parseValue(Parseable.java:180)
... 35 more
Code:
import com.example.spark.settings.Settings
import com.typesafe.config.ConfigFactory
import org.apache.spark.sql.SparkSession
object App extends App {
  val config = ConfigFactory.load()
  val settings = Settings(config = config)

  val spark = SparkSession
    .builder()
    .getOrCreate()

  spark.stop()
}
What do I need to change so that I can provide config files separately?

According to the Spark docs, files passed with --files are placed in the working directory of each executor, but you're trying to access this file from the driver, not an executor.
In order to load config on driver side, try something like this:
/opt/spark-3.0.1-bin-hadoop2.7/bin/spark-submit \
--master spark://127.0.0.1:7077 \
--driver-java-options '-Dconfig.file=../files/local.conf' \
target/scala-2.12/spark-starter-2.jar
If what you want is to load the config on the executor side, you need to use the spark.executor.extraJavaOptions property. In this case you need to load the config inside the lambda that runs on the executor, for example with the RDD API:
myRdd.map { row =>
  val config = ConfigFactory.load()
  ...
}
Visibility of the config will be limited to the scope of the lambda. This is quite a complicated way, and I'll describe a better option below.
My general recommendation on how to work with custom configs in Spark:
Read this chapter of Spark Docs
Load the config on driver side
Map settings that you need to immutable case class
Pass this case class to executors via closures
Keep in mind that the case class with settings should contain as little data as possible; any field types should be either primitive or implement java.io.Serializable (see the sketch below)
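A minimal sketch of that pattern, assuming a hypothetical Settings case class with illustrative fields and config keys, and that local.conf is reachable on the driver via -Dconfig.file:
import com.typesafe.config.ConfigFactory
import org.apache.spark.sql.SparkSession

// Hypothetical settings holder: small, immutable, serializable fields only.
case class Settings(inputPath: String, minLength: Int)

object App extends App {
  // Loaded once, on the driver (picks up -Dconfig.file=... from --driver-java-options).
  val config = ConfigFactory.load()
  val settings = Settings(
    inputPath = config.getString("app.inputPath"), // illustrative keys
    minLength = config.getInt("app.minLength")
  )

  val spark = SparkSession.builder().getOrCreate()

  // The small case class (not the Config object) is captured by the closure
  // and shipped to the executors with each task.
  val longLines = spark.sparkContext
    .textFile(settings.inputPath)
    .filter(_.length > settings.minLength)

  println(longLines.count())
  spark.stop()
}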
An EMR-specific issue is that it's hard to get to the driver's filesystem, so it's preferable to store the config in external storage, typically S3.
The Typesafe config library is not capable of loading files directly from S3, so you can pass the path to the config as an app argument rather than as a -D property, read it from S3 using AmazonS3Client, and then load it as config using ConfigFactory.parseString(). See this answer as an example.
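A hedged sketch of that approach, assuming the AWS SDK for Java v1 is on the classpath; the bucket and key names are illustrative:
import com.amazonaws.services.s3.AmazonS3ClientBuilder
import com.typesafe.config.{Config, ConfigFactory}

// Read the raw config text from S3 on the driver and parse it with Typesafe Config.
// The S3 location comes in as app arguments, e.g.: spark-submit ... my.jar my-bucket envs/local.conf
def loadConfigFromS3(bucket: String, key: String): Config = {
  val s3 = AmazonS3ClientBuilder.defaultClient()
  val rawConfig = s3.getObjectAsString(bucket, key) // whole object as a String
  ConfigFactory.parseString(rawConfig).resolve()
}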

Related

Spark - "A master URL must be set in your configuration" when trying to run an app

I know this question has a duplicate, but my use case is a little specific. I want to run my Spark job (compiled to a .jar) on EMR (via spark-submit) and pass two options like this:
spark-submit --master yarn --deploy-mode cluster <rest of command>
To achieve this, I wrote the code like this:
val sc = new SparkContext(new SparkConf())
val spark = SparkSession.builder.config(sc.getConf).getOrCreate()
However, this gives the following error while building the jar:
org.apache.spark.SparkException: A master URL must be set in your configuration
So what's the workaround? How do I set these two variables in code so that the master and deploy-mode options are picked up at submit time, while still being able to use the variables sc and spark in my code (e.g. val x = spark.read())?
You could simply access the command-line arguments as below and pass as many values as you want.
val spark = SparkSession.builder().appName("Test App")
  .master(args(0))
  .getOrCreate()
spark-submit --master yarn --deploy-mode cluster <your-app.jar> <master-url>
If you need a fancier command-line parser, you can take a look at scopt: https://github.com/scopt/scopt
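For illustration, a hedged sketch in the scopt 3.x style; the option names, the JobArgs case class, and the defaults are assumptions, not part of the original answer:
import scopt.OptionParser
import org.apache.spark.sql.SparkSession

// Illustrative argument holder; field names are assumptions.
case class JobArgs(master: String = "yarn", input: String = "")

// Assumes this code runs inside your App/main where `args` is available.
val parser = new OptionParser[JobArgs]("spark-starter") {
  opt[String]("master").action((x, c) => c.copy(master = x)).text("Spark master URL")
  opt[String]("input").action((x, c) => c.copy(input = x)).text("input path")
}

parser.parse(args, JobArgs()).foreach { parsed =>
  val spark = SparkSession.builder()
    .appName("Test App")
    .master(parsed.master)
    .getOrCreate()
  // ... use parsed.input, spark.read, etc.
  spark.stop()
}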

Workflow and Scheduling Framework for Spark with Scala in Maven Done with Intellij IDEA

I have created a Spark project with Scala. It's a Maven project with all dependencies configured in the POM.
I am using Spark as ETL. The source is a file generated by an API, all kinds of transformations happen in Spark, and the result is then loaded into Cassandra.
Is there any workflow software that can use the jar to automate the process, with email triggering on success or failure of the job flow?
Could someone please help me: can Airflow be used for this purpose? I have used Scala and NOT Python.
Kindly share your thoughts.
There is no built-in mechanism in Spark that will help. A cron job seems reasonable for your case. If you find yourself continuously adding dependencies to the scheduled job, try Azkaban.
One example of such a shell script is:
#!/bin/bash
cd /locm/spark_jobs
export SPARK_HOME=/usr/hdp/2.2.0.0-2041/spark
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_USER_NAME=hdfs
export HADOOP_GROUP=hdfs
#export SPARK_CLASSPATH=$SPARK_CLASSPATH:/locm/spark_jobs/configs/*
CLASS=$1
MASTER=$2
ARGS=$3
CLASS_ARGS=$4
echo "Running $CLASS With Master: $MASTER With Args: $ARGS And Class Args: $CLASS_ARGS"
$SPARK_HOME/bin/spark-submit --class $CLASS --master $MASTER --num-executors 4 --executor-cores 4 $ARGS "application jar file" $CLASS_ARGS
You can even try using SparkLauncher, which can be used to start a Spark application programmatically:
First create a sample spark application and build a jar file for it.
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

object SparkApp extends App {
  val conf = new SparkConf().setMaster("local[*]").setAppName("spark-app")
  val sc = new SparkContext(conf)
  val rdd = sc.parallelize(Array(2, 3, 2, 1))
  rdd.saveAsTextFile("result")
  sc.stop()
}
This is our simple Spark application. Make a jar of this application using sbt assembly; now we write a Scala application through which we start this Spark application, as follows:
import org.apache.spark.launcher.SparkLauncher

object Launcher extends App {
  val spark = new SparkLauncher()
    .setSparkHome("/home/knoldus/spark-1.4.0-bin-hadoop2.6")
    .setAppResource("/home/knoldus/spark_launcher-assembly-1.0.jar")
    .setMainClass("SparkApp")
    .setMaster("local[*]")
    .launch()

  spark.waitFor()
}
In the above code we use a SparkLauncher object and set values on it:
.setSparkHome("/home/knoldus/spark-1.4.0-bin-hadoop2.6") sets the Spark home, which is used internally to call spark-submit.
.setAppResource("/home/knoldus/spark_launcher-assembly-1.0.jar") specifies the jar of our Spark application.
.setMainClass("SparkApp") is the entry point of the Spark program, i.e. the driver program.
.setMaster("local[*]") sets the address of the master; here we run it on the local machine.
.launch() simply starts our Spark application.
This is the minimal requirement; you can also set many other configurations, like passing arguments, adding jars, and setting Spark properties (see the sketch below).
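For illustration, a hedged sketch of some of those extra SparkLauncher calls; the paths, class name, and values are placeholders, not from the original answer:
import org.apache.spark.launcher.SparkLauncher

// Sketch only: paths, class names, and config values are illustrative.
val launcher = new SparkLauncher()
  .setSparkHome("/opt/spark")                    // where spark-submit lives
  .setAppResource("/path/to/app-assembly.jar")   // application jar
  .setMainClass("com.example.SparkApp")          // driver entry point
  .setMaster("yarn")
  .setDeployMode("cluster")                      // client or cluster
  .addAppArgs("2020-01-01", "--dry-run")         // arguments passed to main()
  .addJar("/path/to/extra-dependency.jar")       // extra jar on the classpath
  .setConf(SparkLauncher.DRIVER_MEMORY, "2g")    // any spark.* property
  .setConf("spark.executor.memory", "4g")

val process = launcher.launch()
process.waitFor()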

Set spark.driver.memory for Spark running inside a web application

I have a REST API in Scala Spray that triggers Spark jobs like the following:
path("vectorize") {
get {
parameter('apiKey.as[String]) { (apiKey) =>
if (apiKey == API_KEY) {
MoviesVectorizer.calculate() // Spark Job run in a Thread (returns Future)
complete("Ok")
} else {
complete("Wrong API KEY")
}
}
}
}
I'm trying to find a way to specify the Spark driver memory for the jobs. From what I found, configuring driver.memory from within the application code doesn't affect anything.
The whole web application along with the Spark is packaged in a fat Jar.
I run it by running
java -jar app.jar
Thus, as I understand it, spark-submit is not relevant here (or is it?), so I cannot specify the --driver-memory option when running the app.
Is there any way to set the driver memory for Spark within the web app?
Here's my current Spark configuration:
val spark: SparkSession = SparkSession.builder()
  .appName("Recommender")
  .master("local[*]")
  .config("spark.mongodb.input.uri", uri)
  .config("spark.mongodb.output.uri", uri)
  .config("spark.mongodb.keep_alive_ms", "100000")
  .getOrCreate()

spark.conf.set("spark.executor.memory", "10g")

val sc = spark.sparkContext
sc.setCheckpointDir("/tmp/checkpoint/")
val sqlContext = spark.sqlContext
As it is said in the documentation, the Spark UI Environment tab shows only variables that are affected by the configuration. Everything I set is there - apart from spark.executor.memory.
This happens because you use local mode. In local mode there is no real executor - all Spark components run in a single JVM with a single heap configuration, so executor-specific configuration doesn't matter.
spark.executor options are applicable only when the application is submitted to a cluster.
Also, Spark supports only a single application per JVM instance. This means that all core Spark properties will be applied only when the SparkContext is initialized, and they persist as long as the context (not the SparkSession) is kept alive. Since SparkSession initializes the SparkContext, no additional "core" settings can be applied after getOrCreate.
This means that all "core" options should be provided using the config method of SparkSession.builder, as sketched below.
If you're looking for alternatives to embedding, you can check an exemplary answer to "Best Practice to launch Spark Applications via Web Application?" by T. Gawęda.
Note: officially, Spark doesn't support applications running outside spark-submit, and there are some elusive bugs related to that.

Best Practice for properties in ScalaSpark

I'm starting a project using Hadoop Spark. I'll be developing in Scala.
I'm creating the project from scratch and I was wondering what to do with properties.
I come from a Java background where I use .properties files and load them at the start. Then I have a class used to access the different values of my properties.
Is this also a good practice in Scala?
Tried googling, but there isn't anything relating to this.
You can read the properties file in Scala in a way similar to Java:
import scala.io.Source.fromURL

val reader = fromURL(getClass.getResource("conf/fp.properties")).bufferedReader()
You can read more about I/O package at Scala Standard Library I/O
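If you also want key/value access similar to Java's Properties API, here is a minimal sketch; the resource path and key are illustrative:
import java.util.Properties

// Load a .properties file bundled on the classpath and read a single key.
val props = new Properties()
val in = getClass.getResourceAsStream("/conf/fp.properties") // illustrative path
try props.load(in) finally in.close()

val dbUrl = props.getProperty("db.url") // illustrative key; returns null if missing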
If you are looking to provide Spark properties, those are handled differently, e.g. by providing them at the time you submit the Spark job.
Hope this helps.
Here is what we do:
1. scopt.OptionParser to parse command-line arguments.
2. key/value conf arguments are replicated to System.properties.
3. the command-line arg config-file is used to read the config file (using the Spark context to be able to read from S3/HDFS, with a custom code path to be able to read from jar resources).
4. the config file is parsed using com.typesafe.config.ConfigFactory.
5. default configs from resources and the file that was read are combined using the withFallback mechanism. The order is important since we want Typesafe to use values from (2) to override those from the files (see the sketch below).
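A hedged sketch of that fallback chain; configText is assumed to be the raw contents of the config file read earlier (e.g. from S3/HDFS), and the resource name is illustrative:
import com.typesafe.config.{Config, ConfigFactory}

def buildConfig(configText: String): Config = {
  val fileConfig    = ConfigFactory.parseString(configText)            // from the config-file argument
  val systemProps   = ConfigFactory.systemProperties()                 // key/value args copied to System.properties
  val defaultConfig = ConfigFactory.parseResources("application.conf") // defaults bundled in the jar

  // Earlier sources win over their fallbacks: system properties override the
  // file, which overrides the bundled defaults.
  systemProps
    .withFallback(fileConfig)
    .withFallback(defaultConfig)
    .resolve()
}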
There are three ways to determine properties for Spark:
1. Spark Properties in SparkConf (original spec):
Spark properties control most application settings and are configured
separately for each application. These properties can be set directly
on a SparkConf passed to your SparkContext.
2. Dynamically Loading Spark Properties (original spec); this avoids hard-coding certain configurations in a SparkConf:
./bin/spark-submit --name "My app" --master local[*] --conf spark.eventLog.enabled=false
--conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar
3. Overriding spark-defaults.conf, the default Spark properties file (original spec).
I described the properties by priority: SparkConf has the highest priority and spark-defaults.conf has the lowest priority. For more details, check this post.
If you want to store all properties in a single place, just use Typesafe Config. Typesafe Config gets rid of using input streams for reading files; it's a widely used approach in Scala apps.
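For illustration, a minimal sketch of combining the two; the config keys are assumptions, not a standard layout:
import com.typesafe.config.ConfigFactory
import org.apache.spark.sql.SparkSession

// Keep all tunables in application.conf (Typesafe Config) and feed the
// Spark-specific ones to the builder, i.e. the highest-priority mechanism.
val conf = ConfigFactory.load()

val spark = SparkSession.builder()
  .appName(conf.getString("app.name"))                                   // illustrative key
  .config("spark.executor.memory", conf.getString("app.executorMemory")) // illustrative key
  .getOrCreate()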

How to stop Spark from loading defaults?

When I do a spark-submit, the defaults conf set up in the SPARK_HOME directory is found and loaded into the System properties.
I want to stop the defaults conf from being loaded, and just get the command line arguments, so that I may re-order how spark is configured before creating my spark context.
Is this possible?
There are a couple of ways to modify configurations.
According to the Spark docs, you can modify configs at runtime with flags (http://spark.apache.org/docs/latest/configuration.html):
The Spark shell and spark-submit tool support two ways to load
configurations dynamically. The first are command line options, such
as --master, as shown above. spark-submit can accept any Spark
property using the --conf flag... Any values specified as flags or in the properties file will be passed on to the application and merged with those specified through SparkConf.
which means you can kick off your jobs like this:
./bin/spark-submit --conf spark.eventLog.enabled=false --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" myApp.jar
OR, you can go edit the spark-defaults.conf and not have to pass additional flags in your spark-submit command.
Here's a solution I found acceptable for my issue:
Create a blank "blank.conf" file, and supply it to Spark using --properties-file:
${SPARK_HOME}/bin/spark-submit --master local --properties-file "blank.conf" # etc
Spark will use this file as its configuration instead of finding the defaults conf. You can then manually load up the defaults conf later, before creating your SparkContext, if that's what you want (see the sketch below).
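A hedged sketch of that last step, assuming the defaults live in $SPARK_HOME/conf/spark-defaults.conf and that you want command-line settings to win over them:
import java.io.FileInputStream
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.spark.{SparkConf, SparkContext}

// With a blank --properties-file, this SparkConf only contains what was passed
// on the spark-submit command line.
val conf = new SparkConf()

// Manually read spark-defaults.conf (java.util.Properties accepts the
// whitespace-separated "key value" format) and merge it in with whatever
// precedence you prefer; here command-line settings win.
val defaults = new Properties()
val in = new FileInputStream(sys.env("SPARK_HOME") + "/conf/spark-defaults.conf")
try defaults.load(in) finally in.close()

defaults.stringPropertyNames().asScala.foreach { key =>
  if (!conf.contains(key)) conf.set(key, defaults.getProperty(key).trim)
}

val sc = new SparkContext(conf)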