Pass Azure Synapse notebook name as input in Azure Data Factory - azure-data-factory

I need to pass the Azure Synapse notebook name as an input (dynamically) in Azure Data Factory.
Please assist.

You can do this by storing the notebook name in a pipeline variable and referencing that variable in the Notebook activity's dynamic content. This approach works in Synapse pipelines and in Azure Data Factory as well.
This is my repro for your reference.
In Synapse pipeline:
My Synapse Notebook:
Set variable for notebook name:
Notebook activity:
Synapse Spark notebook executed after pipeline execution:
In ADF pipeline:
In the ADF pipeline, you have to create a linked service to the Synapse workspace; only after that can you access the Synapse Spark pools and Synapse notebooks.
Here too, the notebook name field supports dynamic content, and we can supply the same expression, @variables('variable_name'), for the name of the notebook as above.
NOTE: Please make sure you publish your Synapse Spark notebook so that your recent changes are reflected. When calling a Synapse Spark notebook from ADF, make sure you follow these prerequisites from the Microsoft documentation.

Related

Custom Spark log location configuration in Azure Databricks

We execute Databricks notebook pipelines using Azure Data Factory. We have configured the 'Log delivery' option to get logs on DBFS. Currently, when two pipelines run simultaneously, we are not able to clearly segregate the logs per pipeline. Is it possible, using Spark, when the instance is readily available in Databricks, to point the log directory to, for example, /var/spark/{random_id}/logs/ ?

Mounting Azure Blob Storage to Azure Databricks without using a cluster

We have a requirement that, while provisioning the Databricks service through a CI/CD pipeline in Azure DevOps, we should be able to mount blob storage to DBFS without connecting to a cluster. Is it possible to mount object storage to DBFS by using a bash script from Azure DevOps?
I looked through various forums, but they all mention doing this using dbutils.fs.mount; the problem is we cannot run this command in an Azure DevOps CI/CD pipeline.
Will appreciate any help on this.
Thanks
What you're asking is possible, but it requires a bit of extra work. In our organisation we've tried various approaches, and I've been working with Databricks for a while. The solution that works best for us is to write a bash script that makes use of the databricks-cli in your Azure DevOps pipeline. The approach we have is as follows:
Retrieve a Databricks token using the token API
Configure the Databricks CLI in the CI/CD pipeline
Use Databricks CLI to upload a mount script
Create a Databricks job using the Jobs API and set the mount script as file to execute
The steps above are all contained in a bash script that is part of our Azure DevOps pipeline.
Setting up the CLI
Setting up the Databricks CLI without any manual steps is now possible since you can generate a temporary access token using the Token API. We use a Service Principal for authentication.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/tokens
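For reference, here is a minimal Python sketch of that token step, assuming a service principal that has access to the workspace; the tenant ID, client credentials and workspace URL are placeholders, and depending on how the service principal was granted access you may need additional management-endpoint headers:

# Sketch: get an AAD token for the service principal and exchange it for a
# short-lived Databricks PAT via the Token API (POST /api/2.0/token/create).
# All IDs and URLs below are placeholders.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-principal-app-id>"
CLIENT_SECRET = "<service-principal-secret>"
WORKSPACE_URL = "https://<workspace>.azuredatabricks.net"

# AAD token for the AzureDatabricks resource
# (2ff814a6-3304-4ab8-85cb-cd0e6f879c1d is the global Databricks resource ID).
aad_token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default",
    },
).json()["access_token"]

# Exchange it for a temporary Databricks token (one hour here).
pat = requests.post(
    f"{WORKSPACE_URL}/api/2.0/token/create",
    headers={"Authorization": f"Bearer {aad_token}"},
    json={"lifetime_seconds": 3600, "comment": "ci-cd"},
).json()["token_value"]

print(pat)  # export as DATABRICKS_TOKEN so the CLI can pick it up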
Create a mount script
We have a Scala script that follows the mount instructions. This can be Python as well. See the following link for more information:
https://docs.databricks.com/data/data-sources/azure/azure-datalake-gen2.html#mount-azure-data-lake-storage-gen2-filesystem.
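As a rough illustration, a Python version of such a mount script could look like the sketch below; the secret scope, secret names, tenant ID, container and storage account are placeholders, and the OAuth settings follow the ADLS Gen2 pattern from the linked docs:

# Databricks mount script sketch: mount an ADLS Gen2 filesystem with a
# service principal (OAuth). All names below are placeholders.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": dbutils.secrets.get("cicd-scope", "sp-client-id"),
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("cicd-scope", "sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

# Only mount if the mount point does not exist yet, so re-runs are harmless.
if not any(m.mountPoint == "/mnt/data" for m in dbutils.fs.mounts()):
    dbutils.fs.mount(
        source="abfss://<container>@<storageaccount>.dfs.core.windows.net/",
        mount_point="/mnt/data",
        extra_configs=configs,
    )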
Upload the mount script
In the Azure DevOps pipeline, the databricks-cli is configured by creating a temporary token using the Token API. Once this step is done, we're free to use the CLI to upload our mount script to DBFS or import it as a notebook using the Workspace API.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/workspace#--import
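A hedged sketch of that import step, calling the Workspace API directly (equivalent to databricks workspace import on the CLI); the workspace URL, token, local file name and target path are placeholders:

# Sketch: import the mount script as a Python notebook via
# POST /api/2.0/workspace/import.
import base64
import requests

WORKSPACE_URL = "https://<workspace>.azuredatabricks.net"
headers = {"Authorization": "Bearer <token-from-previous-step>"}

with open("mount_storage.py", "rb") as f:
    content = base64.b64encode(f.read()).decode()

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/workspace/import",
    headers=headers,
    json={
        "path": "/Shared/ci/mount_storage",
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content,
        "overwrite": True,
    },
)
resp.raise_for_status()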
Configure the job that actually mounts your storage
We have a JSON file that defines the job that executes the "mount storage" script. You can define a job to use the script/notebook that you've uploaded in the previous step. You can easily define a job using JSON, check out how it's done in the Jobs API documentation:
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/jobs#--
At this point, triggering the job should create a temporary cluster that mounts the storage for you. You should not need to use the web interface, or perform any manual steps.
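To make that concrete, here is a sketch of creating and triggering such a job through the Jobs API (2.0); the cluster spec, notebook path, workspace URL and token are placeholders:

# Sketch: create the "mount storage" job and run it once via the Jobs API.
import requests

WORKSPACE_URL = "https://<workspace>.azuredatabricks.net"
headers = {"Authorization": "Bearer <token>"}

job_def = {
    "name": "mount-storage",
    "new_cluster": {                     # temporary job cluster
        "spark_version": "11.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 1,
    },
    "notebook_task": {"notebook_path": "/Shared/ci/mount_storage"},
}

job_id = requests.post(
    f"{WORKSPACE_URL}/api/2.0/jobs/create", headers=headers, json=job_def
).json()["job_id"]

requests.post(
    f"{WORKSPACE_URL}/api/2.0/jobs/run-now", headers=headers, json={"job_id": job_id}
)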
You can apply this approach to different environments and resource groups, as do we. For this we make use of Jinja templating to fill out variables that are environment or project specific.
I hope this helps you out. Let me know if you have any questions!

Azure Databricks - Running a Spark Jar from Gen2 Data Lake Storage

I am trying to run a spark-submit from Azure Databricks. Currently I can create a job, with the jar uploaded within the Databricks workspace, and run it.
My queries are:
Is there a way to access a jar residing on Gen2 Data Lake storage and do a spark-submit from the Databricks workspace, or even from Azure ADF? (Because the communication between the workspace and Gen2 storage is protected by "fs.azure.account.key")
Is there a way to do a spark-submit from a Databricks notebook?
Is there a way to access a jar residing on Gen2 Data Lake storage and do a spark-submit from the Databricks workspace, or even from Azure ADF? (Because the communication between the workspace and Gen2 storage is protected by "fs.azure.account.key")
Unfortunately, you cannot access a jar residing on Azure Storage such as an ADLS Gen2/Gen1 account.
Note: The --jars, --py-files, --files arguments support DBFS and S3 paths.
Typically, the Jar libraries are stored under dbfs:/FileStore/jars.
You need to upload the libraries to DBFS and pass them as parameters in the Jar activity.
For more details, refer to "Transform data by running a Jar activity in Azure Databricks using ADF".
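If you need to script that upload, here is a hedged sketch using the DBFS streaming API (create / add-block / close), which also works for jars larger than the 1 MB inline limit of dbfs/put; the workspace URL, token and paths are placeholders:

# Sketch: upload a local jar to dbfs:/FileStore/jars so the ADF Jar activity
# can reference it.
import base64
import requests

WORKSPACE_URL = "https://<workspace>.azuredatabricks.net"
headers = {"Authorization": "Bearer <databricks-pat>"}
local_jar = "target/my-job-assembly.jar"
dbfs_path = "/FileStore/jars/my-job-assembly.jar"

handle = requests.post(
    f"{WORKSPACE_URL}/api/2.0/dbfs/create", headers=headers,
    json={"path": dbfs_path, "overwrite": True},
).json()["handle"]

with open(local_jar, "rb") as f:
    while True:
        chunk = f.read(512 * 1024)      # keep each base64 block well under 1 MB
        if not chunk:
            break
        requests.post(
            f"{WORKSPACE_URL}/api/2.0/dbfs/add-block", headers=headers,
            json={"handle": handle, "data": base64.b64encode(chunk).decode()},
        )

requests.post(f"{WORKSPACE_URL}/api/2.0/dbfs/close", headers=headers,
              json={"handle": handle})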
Is there a way to do a spark-submit from a Databricks notebook?
To answer the second question, you may refer to the Job types below:
Reference: SparkSubmit and "Create a job"
Hope this helps.
Finally I figured out how to run this:
You can run a Databricks jar from ADF and attach it to an existing cluster, which will have the ADLS key configured in the cluster.
It is not possible to do a spark-submit from a notebook, but you can create a Spark job in Jobs, or use the Databricks Runs Submit API to do a spark-submit.
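A hedged sketch of that last option, using the Runs Submit API (POST /api/2.0/jobs/runs/submit) with a spark_submit_task; note that spark-submit runs require a new job cluster, and the workspace URL, token, jar path and main class below are placeholders:

# Sketch: one-off spark-submit run against a jar stored in DBFS.
import requests

WORKSPACE_URL = "https://<workspace>.azuredatabricks.net"
headers = {"Authorization": "Bearer <databricks-pat>"}

run = requests.post(
    f"{WORKSPACE_URL}/api/2.0/jobs/runs/submit",
    headers=headers,
    json={
        "run_name": "adhoc-spark-submit",
        "new_cluster": {
            "spark_version": "11.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",
            "num_workers": 1,
        },
        "spark_submit_task": {
            "parameters": [
                "--class", "com.example.Main",
                "dbfs:/FileStore/jars/my-job-assembly.jar",
            ]
        },
    },
).json()

print(run["run_id"])  # poll /api/2.0/jobs/runs/get?run_id=... for the result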

Azure Data Factory using an existing cluster in Databricks

I have created a pipeline in Azure Data Factory. I created a Databricks workspace, notebook (with some code), and a cluster. I created the connection from ADF to DB. I tested the connection. All lights are green. I published the ADF pipeline.
When I trigger the job, it says SUCCESS. But nothing happens in Databricks. No job is created in DB. The code in the notebook cell is apparently not executed. (I know this because the code prints the current time.)
Has anyone done this successfully?
To be clear, I want Data Factory to use an existing cluster in Databricks, not create a new one. I have named the cluster in the pipeline setup params.
Please reference this tutorial: Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory.
In this tutorial, you use the Azure portal to create an Azure Data Factory pipeline that executes a Databricks notebook against the Databricks jobs cluster. It also passes Azure Data Factory parameters to the Databricks notebook during execution.
You perform the following steps in this tutorial:
Create a data factory.
Create a pipeline that uses Databricks Notebook Activity.
Trigger a pipeline run.
Monitor the pipeline run.
One difference is that you don't need to create a new job cluster; select "use an existing cluster" instead.
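For reference, here is a hedged sketch of what that looks like when defining the Databricks linked service with the azure-mgmt-datafactory Python SDK instead of the portal; the subscription, resource group, factory name, workspace URL, token and cluster ID are placeholders:

# Sketch: Databricks linked service that points at an existing interactive
# cluster rather than a new job cluster.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureDatabricksLinkedService,
    LinkedServiceResource,
    SecureString,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

ls = AzureDatabricksLinkedService(
    domain="https://<workspace>.azuredatabricks.net",
    access_token=SecureString(value="<databricks-pat>"),
    existing_cluster_id="<interactive-cluster-id>",
)

adf_client.linked_services.create_or_update(
    "<resource-group>", "<data-factory-name>",
    "AzureDatabricksExistingCluster", LinkedServiceResource(properties=ls),
)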
Hope this helps.
Solved. The problem was that the notebook (containing my code) was within my User notebook folder. Data Factory did not have permission to see/use my notebook. I created the same notebook within the Shared folder and everything works fine.
I will point out that ADF should issue an error/warning if the named notebook cannot be seen or used. The ADF pipeline verified fine, reported a successful run, but just failed silently.

How to submit a Spark job on HDInsight via PowerShell?

Is there a way to submit a Spark job on HDInsight via Powershell?
I know it can be done via an activity in Azure Data Factory, but is there a way to submit a Python script to PySpark on HDInsight from a PowerShell cmdlet?
Based on my knowledge, there is no Azure PowerShell command that can do this.
You could use the Apache Spark REST API, which is used to submit remote jobs to an Azure HDInsight Spark cluster. Please refer to this feedback.
HDInsight allows remote job submission through the REST API using Livy. It is part of the recent Spark release on Linux.
https://azure.microsoft.com/en-us/documentation/articles/hdinsight-apache-spark-livy-rest-interface/
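For completeness, a hedged sketch of a Livy batch submission; the same REST call can be made from PowerShell with Invoke-RestMethod, and the cluster name, credentials and script location below are placeholders:

# Sketch: submit a PySpark script to an HDInsight Spark cluster via Livy
# (POST /livy/batches), authenticating with the cluster login account.
import requests

CLUSTER_URL = "https://<clustername>.azurehdinsight.net"
auth = ("admin", "<cluster-login-password>")
headers = {"Content-Type": "application/json", "X-Requested-By": "admin"}

resp = requests.post(
    f"{CLUSTER_URL}/livy/batches",
    auth=auth,
    headers=headers,
    json={"file": "wasbs://<container>@<storageaccount>.blob.core.windows.net/scripts/job.py"},
)
batch = resp.json()
print(batch["id"], batch["state"])  # poll GET /livy/batches/<id> for progress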