I have created a pipeline in Azure Data Factory. I created a Databricks workspace, notebook (with some code), and a cluster. I created the connection from ADF to DB. I tested the connection. All lights are green. I published the ADF pipeline.
When I trigger the job, it says SUCCESS. But nothing happens in Databricks. No job is created in DB. The code in the notebook cell is apparently not executed. (I know this because the code prints the current time.)
Has anyone done this successfully?
To be clear, I want Data Factory to use an existing cluster in Databricks, not create a new one. I have named the cluster in the pipeline setup params.
Please reference this tutorial: Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory.
In this tutorial, you use the Azure portal to create an Azure Data Factory pipeline that executes a Databricks notebook against the Databricks jobs cluster. It also passes Azure Data Factory parameters to the Databricks notebook during execution.
You perform the following steps in this tutorial:
Create a data factory.
Create a pipeline that uses Databricks Notebook Activity.
Trigger a pipeline run.
Monitor the pipeline run.
One difference is that you don't need to create a new job cluster; in the Databricks linked service, select the option to use an existing cluster instead.
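If you ever define that linked service in code rather than in the portal, here is a rough sketch using the azure-mgmt-datafactory Python SDK. The subscription, resource group, factory, cluster ID and token below are placeholders, and the model names may differ slightly between SDK versions; the point is simply that you set existing_cluster_id instead of any new-cluster properties.

```python
# Rough sketch only: point the Databricks linked service at an existing cluster.
# All IDs, names and the token are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureDatabricksLinkedService,
    LinkedServiceResource,
    SecureString,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

databricks_ls = AzureDatabricksLinkedService(
    domain="https://<databricks-instance>.azuredatabricks.net",   # workspace URL
    access_token=SecureString(value="<databricks-pat>"),
    existing_cluster_id="<existing-cluster-id>",  # use the existing cluster, not new_cluster_* settings
)

adf_client.linked_services.create_or_update(
    "<resource-group>", "<data-factory-name>", "AzureDatabricksLS",
    LinkedServiceResource(properties=databricks_ls),
)
```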
Hope this helps.
Solved. The problem was that the notebook (containing my code) was within my User notebook folder. Data Factory did not have permission to see/use my notebook. I created the same notebook within the Shared folder and everything works fine.
I will point out that ADF should issue an error/warning if the named notebook cannot be seen or used. The ADF pipeline verified fine, reported a successful run, but just failed silently.
Related
I need to pass the Azure Synapse notebook name as an input (dynamically) in Azure Data Factory.
Please assist.
The approach above will work for you in Synapse and in Azure Data Factory as well.
This is my repro for your reference.
In Synapse pipeline:
My Synapse Notebook:
Set variable for notebook name:
Notebook activity:
Synapse spark Notebook executed after pipeline execution:
In ADF pipeline:
In the ADF pipeline, you have to create a linked service for the Synapse Spark notebook; only after that can you access the Synapse Spark pools and Synapse notebooks.
Here also, dynamic content is supported, and we can pass @variables('variable_name') as the notebook name, as shown above.
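Just to illustrate the shape of that dynamic reference: in the pipeline JSON the notebook name becomes an Expression object rather than a literal string. Written as a Python dict purely for illustration (field names assumed from typical ADF/Synapse pipeline JSON, not copied from an export), it looks roughly like this:

```python
# Rough illustration only: the dynamic notebook reference inside the Notebook
# activity's typeProperties (field names assumed).
notebook_reference = {
    "notebook": {
        "referenceName": {
            "value": "@variables('variable_name')",  # the variable set earlier in the pipeline
            "type": "Expression",
        },
        "type": "NotebookReference",
    }
}
```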
NOTE: Please make sure you publish your Synapse Spark notebook so that your recent changes are reflected. When calling a Synapse Spark notebook from ADF, make sure you follow these prerequisites from the Microsoft documentation.
I have developed an ML predictive model on historical data in Azure Databricks using a Python notebook.
That is, data extraction, preparation, feature engineering, and model training were all done in Databricks in a Python notebook.
I have almost completed the development part; now we want to deploy the ML model into production using Ansible roles.
To deploy to Azure ML you need to build an image from the MLflow model; this is done with MLflow's mlflow.azureml.build_image function. After that you can deploy it to Azure Container Instances (ACI) or Azure Kubernetes Service (AKS) using MLflow's client.create_deployment function (see the Azure docs). There is also the mlflow.azureml.deploy function, which does everything in one step.
This blog post and example notebook show the code for the full process of training, testing, and deploying a model using MLflow and Azure ML.
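For reference, here is a minimal sketch of that one-step flow, assuming the MLflow 1.x-era mlflow.azureml module and the azureml-sdk are installed; the workspace details, run ID and service name are placeholders.

```python
# Minimal sketch, assuming MLflow 1.x (mlflow.azureml module) and azureml-sdk.
import mlflow.azureml
from azureml.core import Workspace
from azureml.core.webservice import AciWebservice

# Placeholder workspace details.
ws = Workspace.get(
    name="<aml-workspace>",
    subscription_id="<subscription-id>",
    resource_group="<resource-group>",
)

# One-step option: build the image from the logged MLflow model and deploy it to ACI.
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
webservice, azure_model = mlflow.azureml.deploy(
    model_uri="runs:/<run-id>/model",   # model logged from the Databricks notebook
    workspace=ws,
    deployment_config=aci_config,
    service_name="<service-name>",
)
print(webservice.scoring_uri)
```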
We execute Databricks notebook pipelines using Azure Data Factory. We have configured the 'Log delivery' option to get logs on DBFS. Currently, when two pipelines run simultaneously, we are not able to clearly segregate the logs per pipeline. Is it possible, using Spark, when the instance is already available in Databricks, to point the log directory to e.g. /var/spark/{random_id}/logs/?
Job clusters in the Databricks linked service in Azure Data Factory are only uploading one init script, even though I have two in my configuration. I believe this is a recent bug in ADF, as my setup was uploading the two scripts a week ago but is not anymore. I also tested the Databricks Clusters API, and there I can upload two scripts.
Databricks Init Scripts Set up in Azure Data Factory
We have a requirement that, while provisioning the Databricks service through a CI/CD pipeline in Azure DevOps, we should be able to mount a blob storage container to DBFS without connecting to a cluster. Is it possible to mount object storage to DBFS by using a bash script from Azure DevOps?
I looked through various forums, but they all mention doing this using dbutils.fs.mount; the problem is that we cannot run this command in an Azure DevOps CI/CD pipeline.
Will appreciate any help on this.
Thanks
What you're asking is possible, but it requires a bit of extra work. In our organisation we've tried various approaches, and I've been working with Databricks for a while. The solution that works best for us is to write a bash script that makes use of the databricks-cli in your Azure DevOps pipeline. Our approach is as follows:
Retrieve a Databricks token using the token API
Configure the Databricks CLI in the CI/CD pipeline
Use Databricks CLI to upload a mount script
Create a Databricks job using the Jobs API and set the mount script as file to execute
The steps above are all contained in a bash script that is part of our Azure DevOps pipeline.
Setting up the CLI
Setting up the Databricks CLI without any manual steps is now possible since you can generate a temporary access token using the Token API. We use a Service Principal for authentication.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/tokens
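As an illustration of this step, here is a hedged Python sketch of the token exchange; the tenant/client IDs, secret and workspace URL are placeholders supplied by the pipeline, and it assumes the service principal has already been added to the workspace.

```python
# Sketch: exchange service-principal credentials for a short-lived Databricks PAT.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<service-principal-app-id>"
CLIENT_SECRET = "<service-principal-secret>"
WORKSPACE_URL = "https://<databricks-instance>.azuredatabricks.net"

# 1. AAD access token for the Azure Databricks resource
#    (2ff814a6-3304-4ab8-85cb-cd0e6f879c1d is the well-known Azure Databricks application ID).
aad_token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default",
    },
).json()["access_token"]

# 2. Create a temporary Databricks personal access token via the Token API
#    (assumes the service principal is already a user in the workspace).
pat = requests.post(
    f"{WORKSPACE_URL}/api/2.0/token/create",
    headers={"Authorization": f"Bearer {aad_token}"},
    json={"lifetime_seconds": 3600, "comment": "ci-cd"},
).json()["token_value"]
```

The resulting token can then be exposed to the databricks-cli, for example via the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables.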
Create a mount script
We have a Scala script that follows the mount instructions; this can be Python as well. See the following link for more information:
https://docs.databricks.com/data/data-sources/azure/azure-datalake-gen2.html#mount-azure-data-lake-storage-gen2-filesystem.
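For completeness, here is a minimal Python version of such a mount script, following the linked ADLS Gen2 instructions; the container, storage account, secret scope and key names are placeholders.

```python
# Minimal Python mount script, per the linked ADLS Gen2 docs (placeholders throughout).
# Note: dbutils is only available when this runs inside Databricks (as a notebook or job).
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": dbutils.secrets.get("my-scope", "sp-client-id"),
    "fs.azure.account.oauth2.client.secret": dbutils.secrets.get("my-scope", "sp-client-secret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
    mount_point="/mnt/<mount-name>",
    extra_configs=configs,
)
```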
Upload the mount script
In the Azure DevOps pipeline, the databricks-cli is configured by creating a temporary token using the Token API. Once this step is done, we're free to use the CLI to upload our mount script to DBFS or import it as a notebook using the Workspace API.
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/workspace#--import
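As an illustration, the same import can also be done with a direct Python call to the Workspace API instead of the CLI, reusing the workspace URL and token from the token sketch above; the local filename and target path are placeholders.

```python
# Sketch: import the mount script as a /Shared notebook via the Workspace API.
import base64
import requests

with open("mount_storage.py", "rb") as f:
    content = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    f"{WORKSPACE_URL}/api/2.0/workspace/import",
    headers={"Authorization": f"Bearer {pat}"},
    json={
        "path": "/Shared/mount_storage",   # assumed target path in the workspace
        "format": "SOURCE",
        "language": "PYTHON",
        "content": content,
        "overwrite": True,
    },
)
resp.raise_for_status()
```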
Configure the job that actually mounts your storage
We have a JSON file that defines the job that executes the "mount storage" script. You can define the job to use the script/notebook that you uploaded in the previous step. It's easy to define a job using JSON; check out how it's done in the Jobs API documentation:
https://learn.microsoft.com/en-US/azure/databricks/dev-tools/api/latest/jobs#--
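Here is a hedged sketch of such a job definition and a one-off trigger via the Jobs API; the cluster size, runtime version and notebook path are placeholders, and it again reuses the workspace URL and token from the token sketch above.

```python
# Sketch: create a one-off job that runs the uploaded mount notebook on a temporary cluster.
job_spec = {
    "name": "mount-storage",
    "new_cluster": {
        "spark_version": "7.3.x-scala2.12",   # placeholder runtime version
        "node_type_id": "Standard_DS3_v2",    # placeholder node type
        "num_workers": 1,
    },
    "notebook_task": {"notebook_path": "/Shared/mount_storage"},
}

job_id = requests.post(
    f"{WORKSPACE_URL}/api/2.0/jobs/create",
    headers={"Authorization": f"Bearer {pat}"},
    json=job_spec,
).json()["job_id"]

# Trigger it once; the temporary cluster mounts the storage and is then released.
requests.post(
    f"{WORKSPACE_URL}/api/2.0/jobs/run-now",
    headers={"Authorization": f"Bearer {pat}"},
    json={"job_id": job_id},
)
```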
At this point, triggering the job should create a temporary cluster that mounts the storage for you. You should not need to use the web interface, or perform any manual steps.
You can apply this approach to different environments and resource groups, as we do. For this we make use of Jinja templating to fill out variables that are environment- or project-specific.
I hope this helps you out. Let me know if you have any questions!