Kubernetes spark-submit - scala

I am trying to use Kubernetes as the cluster manager for Spark, and I also want to ship the container logs to Splunk. I already have a monitoring stack deployed (fluent-bit, Prometheus, etc.) in the same namespace, and the way it works is that if your pod has a certain environment variable set, it will start reading the logs and push them to Splunk.
What I am not able to find is how to set such an environment variable and populate it:
bin/spark-submit \
--deploy-mode cluster \
--class org.apache.spark.examples.SparkPi \
--master k8s://https://my-kube-cluster.com \
--conf spark.executor.instances=2 \
--conf spark.app.name=spark-pi \
....
....
....
--conf spark.kubernetes.driverEnv.UID="set it to spark driver pod id" \
local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar

To configure additional Spark driver pod environment variables, you can pass additional --conf spark.kubernetes.driverEnv.EnvironmentVariableName=EnvironmentVariableValue (please refer to the docs for more details).
To configure additional Spark executor pod environment variables, you can pass additional --conf spark.executorEnv.EnvironmentVariableName=EnvironmentVariableValue (please refer to the docs for more details).
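For example, to expose a variable to both the driver and the executors (SPLUNK_INDEX and my-index are hypothetical placeholders for whatever variable and value your log shipper looks for), you would add:
--conf spark.kubernetes.driverEnv.SPLUNK_INDEX=my-index \
--conf spark.executorEnv.SPLUNK_INDEX=my-index \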
Hope it helps.

Related

Spark submit truncates arguments in yarn cluster mode

I am running a Spark application on a YARN cluster in cluster deploy mode using the following command:
spark-submit --conf spark.executor.memory=24g --conf spark.master=yarn --conf spark.submit.deployMode=cluster --conf spark.executor.extraJavaOptions=-Dfile.encoding=UTF-8 --conf spark.files=file:///opt/configurations/app.conf --class com.example.HelloWorld --queue sample_q file:///opt/jars/example.jar '{"sample":{}}'
This command does not pass the entire argument to the HelloWorld class.
Main method argument passed : {"sample":{
Main method argument expected: {"sample":{}}
The same command runs properly in client deploy mode:
spark-submit --conf spark.executor.memory=24g --conf spark.master=yarn --conf spark.submit.deployMode=client --conf spark.executor.extraJavaOptions=-Dfile.encoding=UTF-8 --conf spark.files=file:///opt/configurations/app.conf --class com.example.HelloWorld --queue sample_q file:///opt/jars/example.jar '{"sample":{}}'
Upon inspecting the launch_container.sh script on the YARN worker node, I found that the command there also contained the truncated string (--arg '{\"sample\":{').
Spark Version: 2.3
Hadoop Version: 2.7.3
YARN treats {{ and }} as parameter-expansion markers, so any occurrence is interpreted as an environment variable reference and replaced with the corresponding value; since no such environment variable exists, the matched text is replaced with nothing.
This causes an issue in cluster deploy mode because the driver command is launched inside a YARN container.
Reference: YarnApplicationConstants
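One possible workaround (an assumption on my part, not something stated in the reference above) is to keep literal braces off the command line entirely, for example by base64-encoding the JSON argument and decoding it inside the application:
# encode the argument so no literal {{ or }} reaches YARN's launch_container.sh
ARG_B64=$(printf '%s' '{"sample":{}}' | base64)
spark-submit --conf spark.master=yarn --conf spark.submit.deployMode=cluster \
  --class com.example.HelloWorld --queue sample_q file:///opt/jars/example.jar "$ARG_B64"
# inside HelloWorld's main: new String(java.util.Base64.getDecoder.decode(args(0)), "UTF-8")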

Pyspark submit master yarn cluster deploy - logs location

I submitted a PySpark job with the spark-submit command on a Hadoop cluster. The command is as follows:
spark-submit --master yarn --deploy-mode cluster --driver-memory 1g --num-executors 2 --executor-memory 1g --executor-cores 2 --py-files module_stm_extracts.py,module_table_compare.py datacheck.py
The job completed, but I never got the application ID in the console.
How do I find the application log so that I can review it?
You can find it in the YARN ResourceManager web UI, which by default is accessible on port 8088 of the master node: http://<master_node_ip>:8088
Or you can list the applications from the command line:
yarn application -list -appStates ALL
And with the applicationId, fetch the logs with the following command:
yarn logs -applicationId <application_id>

How can I run uncompiled Spark Scala/spark-shell code as a Dataproc job?

Normally, if I'm using Scala for Spark jobs I'll compile a jarfile and submit it with gcloud dataproc jobs submit spark, but sometimes for very lightweight jobs I might be using uncompiled Scala code in a notebook or using the spark-shell REPL, where I assume a SparkContext is already available.
For some of these lightweight use cases I can equivalently use PySpark and submit with gcloud dataproc jobs submit pyspark but sometimes I need easier access to Scala/Java libraries such as directly creating a org.apache.hadoop.fs.FileSystem object inside of map functions. Is there any easy way to submit such "spark-shell" equivalent jobs directly from a command-line using Dataproc Jobs APIs?
At the moment, there isn't a specialized top-level Dataproc Job type for uncompiled Spark Scala, but under the hood, spark-shell is just using the same mechanisms as spark-submit to run a specialized REPL driver: org.apache.spark.repl.Main. Thus, combining this with the --files flag available in gcloud dataproc jobs submit spark, you can just write snippets of Scala that you may have tested in a spark-shell or notebook session, and run that as your entire Dataproc job, assuming job.scala is a local file on your machine:
gcloud dataproc jobs submit spark --cluster ${CLUSTER} \
--class org.apache.spark.repl.Main \
--files job.scala \
-- -i job.scala
Just like any other file, you can also specify any Hadoop-compatible path in the --files argument, such as gs:// or even hdfs://, assuming you've already placed your job.scala file there:
gcloud dataproc jobs submit spark --cluster ${CLUSTER} \
--class org.apache.spark.repl.Main \
--files gs://${BUCKET}/job.scala \
-- -i job.scala
gcloud dataproc jobs submit spark --cluster ${CLUSTER} \
--class org.apache.spark.repl.Main \
--files hdfs:///tmp/job.scala \
-- -i job.scala
If you've staged your job file onto the Dataproc master node via an init action, you'd use file:/// to specify that the file is found on the cluster's local filesystem instead of your local filesystem where you're running gcloud:
gcloud dataproc jobs submit spark --cluster ${CLUSTER} \
--class org.apache.spark.repl.Main \
--files file:///tmp/job.scala \
-- -i job.scala
Note that in all cases, the file becomes a local file in the working directory of the main driver job, so the argument to "-i" can just be a relative path to the filename.
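For completeness, the job.scala itself can be a plain script that uses the sc and spark objects the REPL driver already provides, just as in an interactive spark-shell session. A minimal hypothetical sketch (the computation and the explicit exit are illustrative, not prescribed by Dataproc):
// job.scala -- `sc` and `spark` are created by org.apache.spark.repl.Main before the script runs
val total = sc.parallelize(1 to 1000).sum()
println(s"sum = $total")
sys.exit(0)  // exit explicitly so the REPL does not wait for further input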

Setting spark.app.name for PySpark kernel with Jupyter Notebook

I am running a Jupyter Notebook server with PySpark (as explained here) on a Hadoop cluster with YARN. I noticed that each Spark application launched via a new notebook appears in the Spark web UI as an application named "PySparkShell" (which corresponds to the spark.app.name configuration).
My problem is that I sometimes have many notebooks running in Jupyter, but all of them appear in Spark's web UI under the same generic name, "PySparkShell". I know I can change the default name to something else, and I also know that I cannot change the app name once a SparkContext has been created. My question is: can I make it so that each application is given a different name when the kernel starts? (Preferably something that will help me connect the notebook name, i.e. 'Untitled.ipynb', to its Spark application name or ID.)
UPDATE: added a code snippet of my run command for the notebook
export DAEMON_PORT=8880
ANACONDA_PATH=/opt/cloudera/parcels/Anaconda/bin
export PATH=$ANACONDA_PATH:$PATH
export PYSPARK_DRIVER_PYTHON=$ANACONDA_PATH/jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=False --NotebookApp.ip='*' --NotebookApp.port=$DAEMON_PORT"
pyspark2 \
--executor-memory 5g \
--executor-cores 4 \
--driver-memory 20g \
--conf spark.shuffle.service.enabled=true \
--conf spark.dynamicAllocation.enabled=true \
--conf spark.dynamicAllocation.minExecutors=0 \
--conf spark.dynamicAllocation.maxExecutors=40
In the first few lines, where you create your SparkContext(), you can include a config object. You can use the config object to set various settings by chaining set('property_name', 'property_value') calls.
I'll demonstrate by setting the executor memory:
from pyspark import SparkConf, SparkContext

# build the configuration before the context is created; the app name cannot be changed afterwards
conf = SparkConf().setAppName('Your_Project_name').set("spark.executor.memory", "5g")
sc = SparkContext(conf=conf)
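Building on that, one way to get a distinct name per notebook is to construct the app name when the kernel starts. This is a sketch under a couple of assumptions: the timestamp-based name is just an illustration, and any context the PySpark startup script already created has to be stopped first, since the name cannot be changed once a SparkContext exists.
import time
from pyspark import SparkConf, SparkContext

try:
    sc.stop()  # stop the pre-created "PySparkShell" context, if the kernel already made one
except NameError:
    pass

conf = SparkConf().setAppName("notebook-" + str(int(time.time()))).set("spark.executor.memory", "5g")
sc = SparkContext(conf=conf)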

Pass packages to pyspark running on dataproc from airflow?

We have an Airflow DAG that involves running a PySpark job on Dataproc. We need a JDBC driver during the job, which I'd normally pass to the Dataproc submit command:
gcloud dataproc jobs submit pyspark \
--cluster my-cluster \
--properties spark.jars.packages=mysql:mysql-connector-java:6.0.6 \
--py-files ...
But how can I do it with Airflow's DataProcPySparkOperator?
For now we're adding this library to the cluster itself:
gcloud dataproc clusters create my-cluster \
--region global \
--zone europe-west1-d \
...
--properties spark:spark.jars.packages=mysql:mysql-connector-java:6.0.6 \
...
This seems to be working fine, but it doesn't feel like the right way to do it. Is there another way?
I believe you want to pass dataproc_pyspark_properties to the DataProcPySparkOperator.
See:
https://github.com/apache/incubator-airflow/blob/master/airflow/contrib/operators/dataproc_operator.py
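A sketch of what that could look like in the DAG (the DAG name, task_id, main path, cluster name and region are illustrative assumptions):
from datetime import datetime
from airflow import DAG
from airflow.contrib.operators.dataproc_operator import DataProcPySparkOperator

with DAG('dataproc_pyspark_example', start_date=datetime(2019, 1, 1), schedule_interval=None) as dag:
    submit_job = DataProcPySparkOperator(
        task_id='run_pyspark_on_dataproc',
        main='gs://my-bucket/jobs/my_job.py',   # hypothetical GCS path to the PySpark script
        cluster_name='my-cluster',
        region='europe-west1',                  # assumption; older versions default to 'global'
        dataproc_pyspark_properties={
            'spark.jars.packages': 'mysql:mysql-connector-java:6.0.6',
        },
    )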