Is there a way to submit a Spark job on HDInsight via PowerShell?
I know it can be done via an activity in Azure Data Factory, but is there a way to submit a Python script to a PySpark HDInsight cluster from a PowerShell cmdlet?
As far as I know, there is no Azure PowerShell command that can do this.
You could use the Apache Spark REST API, which is used to submit remote jobs to an Azure HDInsight Spark cluster. Please refer to this feedback:
HDInsight allows remote job submission through the REST API using
Livy. It is part of the recent Spark release on Linux.
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-apache-spark-livy-rest-interface/
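For reference, submitting a batch to that Livy endpoint is a single HTTP POST. A minimal sketch follows, written here in Python with requests (the same call can be issued from PowerShell with Invoke-RestMethod); the cluster name, credentials, and script paths are placeholders, not real values.

# Sketch: submit a PySpark script to an HDInsight Spark cluster via the Livy batches endpoint.
# Cluster name, admin credentials, and the wasb:/// paths are placeholders.
import requests

cluster = "mycluster"  # hypothetical HDInsight cluster name
livy_url = f"https://{cluster}.azurehdinsight.net/livy/batches"

payload = {
    "file": "wasb:///example/scripts/myscript.py",        # PySpark script already in cluster storage
    "args": ["--input", "wasb:///example/data/input.csv"],
}

resp = requests.post(
    livy_url,
    json=payload,
    headers={"X-Requested-By": "admin"},
    auth=("admin", "<cluster-login-password>"),            # HDInsight cluster login (basic auth)
)
print(resp.json())  # contains the batch id and state, e.g. {"id": 0, "state": "starting", ...}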
I'm trying to submit Spark jobs to an Azure HDInsight cluster using a Jupyter Notebook with the PySpark kernel. I know that in order to do this I must authenticate to Livy, and the examples I've seen involved storing a config on the machine that runs Jupyter, and pointing PySpark to that config.
But is there a way to do this that doesn't involve storing the password in plaintext on the machine? Say each request just needs the auth credentials; can I pull them from Azure Key Vault as they're needed and include them with the PySpark requests?
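As a sketch of what that could look like, assuming the Livy endpoint accepts basic auth: fetch the credentials from Key Vault at request time instead of keeping them in a plaintext config. The vault URL, secret name, cluster name, and script path below are hypothetical.

# Sketch: pull the Livy password from Azure Key Vault per request rather than storing it on disk.
# Vault URL, secret name, cluster name, and script path are hypothetical.
import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

vault = SecretClient(
    vault_url="https://my-keyvault.vault.azure.net",      # hypothetical Key Vault
    credential=DefaultAzureCredential(),
)
livy_password = vault.get_secret("livy-password").value   # hypothetical secret name

resp = requests.post(
    "https://mycluster.azurehdinsight.net/livy/batches",
    json={"file": "wasb:///example/scripts/myscript.py"},
    auth=("admin", livy_password),
)
print(resp.status_code)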
We execute Databricks notebook pipelines using Azure Data Factory. We have configured the 'Log delivery' option to get logs on DBFS. Currently, when two pipelines run simultaneously, we are not able to clearly segregate the logs per pipeline. Is it possible, using Spark, when the instance is readily available in Databricks, to point the log directory to e.g. /var/spark/{random_id}/logs/?
We have a requirement that while provisioning the Databricks service through a CI/CD pipeline in Azure DevOps we should be able to mount a blob storage to DBFS without connecting to a cluster. Is it possible to mount object storage to DBFS by using a bash script from Azure DevOps?
I looked through various forums, but they all mention doing this using dbutils.fs.mount; the problem is we cannot run this command in an Azure DevOps CI/CD pipeline.
Would appreciate any help on this.
Thanks
What you're asking is possible, but it requires a bit of extra work. In our organisation we've tried various approaches, and I've been working with Databricks for a while. The solution that works best for us is to write a bash script that makes use of the databricks-cli in your Azure DevOps pipeline. The approach we have is as follows:
Retrieve a Databricks token using the token API
Configure the Databricks CLI in the CI/CD pipeline
Use Databricks CLI to upload a mount script
Create a Databricks job using the Jobs API and set the mount script as the file to execute
The steps above are all contained in a bash script that is part of our Azure DevOps pipeline.
Setting up the CLI
Setting up the Databricks CLI without any manual steps is now possible since you can generate a temporary access token using the Token API. We use a Service Principal for authentication.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/tokens
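As a rough illustration (not the exact script), the token exchange can look like the sketch below. The tenant ID, service principal credentials, and workspace URL are placeholders, and depending on how the service principal was added to the workspace you may also need the extra Azure management-token headers described in the linked docs.

# Sketch: obtain an AAD token for the service principal and exchange it for a
# short-lived Databricks token via the Token API.
# Tenant/client IDs, secret, and workspace URL are placeholders.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-principal-app-id>"
CLIENT_SECRET = "<service-principal-secret>"
WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace URL

# 1. AAD token for the Azure Databricks resource (well-known resource ID).
aad_token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d",
    },
).json()["access_token"]

# 2. Exchange it for a temporary Databricks token for the pipeline to use.
pat = requests.post(
    f"{WORKSPACE_URL}/api/2.0/token/create",
    headers={"Authorization": f"Bearer {aad_token}"},
    json={"lifetime_seconds": 3600, "comment": "CI/CD mount job"},
).json()["token_value"]

print(pat)  # export as DATABRICKS_TOKEN for the databricks-cli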
Create a mount script
We have a Scala script that follows the mount instructions. This can be Python as well. See the following link for more information:
https://docs.databricks.com/data/data-sources/azure/azure-datalake-gen2.html#mount-azure-data-lake-storage-gen2-filesystem.
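For illustration, a Python version of such a mount script could look roughly like this; the storage account, container, tenant ID, and secret scope/key names are placeholders, and dbutils is only available when the script runs inside Databricks.

# Sketch of a Python mount script for ADLS Gen2, following the linked instructions.
# Storage account, container, tenant ID, and secret scope/keys are placeholders.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": dbutils.secrets.get(scope="ci-scope", key="sp-client-id"),
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get(scope="ci-scope", key="sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

# dbutils is provided by the Databricks runtime; this script is meant to run as a job or notebook.
dbutils.fs.mount(
    source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/datalake",
    extra_configs=configs,
)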
Upload the mount script
In the Azure DevOps pipeline the databricks-cli is configured by creating a temporary token using the Token API. Once this step is done, we're free to use the CLI to upload our mount script to DBFS or import it as a notebook using the Workspace API.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/workspace#--import
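As an example, the same import can be done with a direct call to the Workspace API (a sketch; workspace URL, token, and paths are placeholders), or equivalently with databricks workspace import from the CLI.

# Sketch: import the mount script as a Python notebook via the Workspace API.
# Workspace URL, token, and paths are placeholders.
import base64
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace URL
TOKEN = "<temporary-databricks-token>"

with open("mount_storage.py", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "path": "/Shared/ci/mount_storage",  # hypothetical workspace path
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content,
        "overwrite": True,
    },
)
resp.raise_for_status()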
Configure the job that actually mounts your storage
We have a JSON file that defines the job that executes the "mount storage" script. You can define a job to use the script/notebook that you've uploaded in the previous step. You can easily define a job using JSON; check out how it's done in the Jobs API documentation:
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/jobs#--
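As a sketch (the cluster spec, notebook path, and workspace URL below are placeholders rather than our exact definition), creating and triggering such a job boils down to two Jobs API calls:

# Sketch: create and trigger a job that runs the uploaded mount notebook on a
# temporary (new) cluster. Spark version, node type, and paths are placeholders.
import requests

WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # hypothetical workspace URL
TOKEN = "<temporary-databricks-token>"
headers = {"Authorization": f"Bearer {TOKEN}"}

job_spec = {
    "name": "mount-storage",
    "new_cluster": {
        "spark_version": "7.3.x-scala2.12",   # placeholder runtime
        "node_type_id": "Standard_DS3_v2",    # placeholder node type
        "num_workers": 1,
    },
    "notebook_task": {"notebook_path": "/Shared/ci/mount_storage"},
}

job_id = requests.post(
    f"{WORKSPACE_URL}/api/2.0/jobs/create", headers=headers, json=job_spec
).json()["job_id"]

run = requests.post(
    f"{WORKSPACE_URL}/api/2.0/jobs/run-now", headers=headers, json={"job_id": job_id}
).json()
print(run)  # contains run_id; poll /api/2.0/jobs/runs/get to wait for completion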
At this point, triggering the job should create a temporary cluster that mounts the storage for you. You should not need to use the web interface, or perform any manual steps.
You can apply this approach to different environments and resource groups, as do we. For this we make use of Jinja templating to fill out variables that are environment or project specific.
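To give an idea, the templating step is just rendering the job definition with environment-specific values; the template and variable names below are made up for illustration.

# Sketch: render an environment-specific job definition from a Jinja template.
# The template and variable names are made up for illustration.
from jinja2 import Template

template = Template("""
{
  "name": "mount-storage-{{ environment }}",
  "notebook_task": {"notebook_path": "/Shared/{{ project }}/mount_storage"}
}
""")

print(template.render(environment="dev", project="my-project"))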
I hope this helps you out. Let me know if you have any questions!
Please share your experiences with orchestrating jobs, run through various tools and programmatic interfaces, to load data into Snowflake:
Python scripts on EC2 instances, currently scheduled using crontab
tasks in Snowflake
Alteryx workflows
Are there any tools with a sophisticated UI to create job workflows with dependencies?
The workflow can have:
a Python script followed by a task
an Alteryx workflow followed by a Python script and then a task
If any job fails, it should send emails to the team.
Thanks
We have used both Control-M and Apache Airflow to schedule and orchestrate data loads into Snowflake.
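For what it's worth, here is a rough sketch of an Airflow DAG with the kind of dependencies and failure emails described above; the operators, commands, schedule, and email addresses are placeholders, not a recommendation of one tool over the other.

# Sketch of an Airflow DAG mirroring the dependencies in the question:
# an Alteryx workflow, then a Python script, then a Snowflake task, with failure emails.
# Commands, script paths, schedule, and email addresses are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-team",
    "email": ["data-team@example.com"],  # hypothetical distribution list
    "email_on_failure": True,            # mail the team when any task fails
    "retries": 1,
}

with DAG(
    dag_id="load_to_snowflake",
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 2 * * *",       # replaces the crontab schedule
    default_args=default_args,
    catchup=False,
) as dag:
    run_alteryx = BashOperator(
        task_id="run_alteryx_workflow",
        bash_command="python /opt/jobs/trigger_alteryx.py",        # hypothetical trigger script
    )
    run_python = BashOperator(
        task_id="run_python_load",
        bash_command="python /opt/jobs/load_to_snowflake.py",      # hypothetical load script
    )
    run_task = BashOperator(
        task_id="execute_snowflake_task",
        bash_command="python /opt/jobs/execute_snowflake_task.py", # e.g. EXECUTE TASK via the connector
    )

    run_alteryx >> run_python >> run_task  # dependencies: workflow, then script, then task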
I have a Spark job which I normally run with spark-submit, passing the input file name as the argument. Now I want to make the job available to the team, so people can submit an input file (probably through some web API), the Spark job will then be triggered, and it will return the result file to the user (probably also through the web API). (I am using Java/Scala.)
What do I need to build in order to trigger the Spark job in such a scenario? Is there a tutorial somewhere? Should I use Spark Streaming for such a case? Thanks!
One way to go is to have a web server listening for jobs, with each web request potentially triggering an execution of spark-submit.
You can execute this using Java's ProcessBuilder.
To the best of my knowledge, there is no good way of invoking spark jobs other than through spark-submit.
You can use Livy.
Livy is an open source REST interface for using Spark from anywhere.
Livy is a new open source Spark REST server for submitting and interacting with your Spark jobs from anywhere. Livy is conceptually based on the incredibly popular IPython/Jupyter, but implemented to better integrate into the Hadoop ecosystem with multiple users. Spark can now be offered as a service to anyone in a simple way: Spark shells in Python or Scala can be run by Livy in the cluster while the end user manipulates them at their own convenience through a REST API. Regular non-interactive applications can also be submitted. The output of the jobs can be introspected and returned in a tabular format, which makes it visualizable in charts. Livy can point to a unique Spark cluster and create several contexts by users. With YARN impersonation, jobs will be executed with the actual permissions of the users submitting them.
Please check this URL for more info:
https://github.com/cloudera/livy
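For example, submitting a packaged Spark application as a Livy batch and waiting for it to finish comes down to two REST calls. A sketch (the Livy host, jar path, and class name are placeholders):

# Sketch: submit a jar-based Spark job as a Livy batch and poll until it finishes.
# Livy host, jar path, main class, and arguments are placeholders.
import time
import requests

LIVY = "http://livy-server:8998"  # hypothetical Livy endpoint
headers = {"X-Requested-By": "api"}

batch = requests.post(
    f"{LIVY}/batches",
    headers=headers,
    json={
        "file": "hdfs:///jobs/my-spark-job.jar",
        "className": "com.example.MySparkJob",
        "args": ["hdfs:///input/user-upload.csv"],
    },
).json()

# Poll the batch state until Livy reports a terminal state.
while True:
    state = requests.get(f"{LIVY}/batches/{batch['id']}/state", headers=headers).json()["state"]
    if state in ("success", "dead", "killed"):
        break
    time.sleep(5)
print(state)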
You can use the SparkLauncher class to do this. You will need a REST API that takes the file from the user and then triggers the Spark job using SparkLauncher.
// Launch the Spark application as a child process; the jar path and main class
// come from the user's job submission.
Process spark = new SparkLauncher()
        .setAppResource(job.getJarPath())
        .setMainClass(job.getMainClass())
        .setMaster("spark://" + this.serverHost + ":" + this.port)
        .launch();