How do I get file metadata using Databricks Connect? - pyspark

I am using Azure Databricks, which I have hooked up to a data lake, and I want to get metadata such as the modified date for the files in the lake. I am able to do this within Databricks itself using os.stat() as detailed in this answer, but I am developing locally using Databricks Connect and can't figure out how to do this, since locally it only has the context of my local file system.
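One possible approach is to skip local file-system calls entirely and ask Spark for the metadata, since with Databricks Connect the query runs on the remote cluster, which can see the lake. A minimal sketch using Spark's binaryFile source; the abfss path is a placeholder:

```python
# Sketch: list file metadata through Spark so it runs on the remote cluster,
# not on the local machine. Path and storage account are placeholders.
path = "abfss://<container>@<storage-account>.dfs.core.windows.net/some/folder"

meta = (
    spark.read.format("binaryFile")
    .option("recursiveFileLookup", "true")          # walk sub-folders too
    .load(path)
    .select("path", "modificationTime", "length")   # metadata only
)
meta.show(truncate=False)
```

Depending on the runtime version, dbutils.fs.ls() (which is also reachable through Databricks Connect) may expose a modificationTime field on its FileInfo entries as well.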

Related

Delete a file in sharepoint using Azure Data Factory Delete Activity

I am trying to delete a file located in a SharePoint directory after a successful Copy activity. The Delete activity has the following properties:
Linked Service: HTTP
Dataset: Excel
Additional header: #{concat('Authorization: Bearer ',activity('GetToken').output.access_token)}
Here, GetToken is the Web activity in ADF that generates a token for accessing SharePoint.
When I run the pipeline, I get the error below:
Invalid delete activity payload with 'folderPath' that is required and cannot be empty.
I have no clue how to tackle this.
As per my understanding, you are trying to delete a file in SharePoint Online using Azure Data Factory.
Currently, the Delete activity in ADF only supports the data stores below, not SharePoint Online, which is why you are receiving the above error.
Azure Blob storage
Azure Data Lake Storage Gen1
Azure Data Lake Storage Gen2
Azure Files
File System
FTP
SFTP
Amazon S3
Amazon S3 Compatible Storage
Google Cloud Storage
Oracle Cloud Storage
HDFS
Image: Delete activity Supported Data stores
Ref: Delete activity supported data sources
As a workaround, you may try exploring the HTTP connector, or you can use a Custom activity and write your own code to delete the file from SharePoint, along the lines of the sketch below.
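A rough sketch of what such custom code might look like against the SharePoint REST API; the site URL, file path, and token handling below are placeholders, not values from the question:

```python
# Hypothetical sketch: delete a SharePoint file via its REST API.
# Site URL, file path and token are placeholders.
import requests

site_url = "https://contoso.sharepoint.com/sites/MySite"
server_relative_path = "/sites/MySite/Shared Documents/file.xlsx"
token = "<access-token>"  # e.g. the token produced by the GetToken Web activity

resp = requests.post(
    f"{site_url}/_api/web/GetFileByServerRelativeUrl('{server_relative_path}')",
    headers={
        "Authorization": f"Bearer {token}",
        "If-Match": "*",             # delete regardless of the file's current version
        "X-HTTP-Method": "DELETE",   # SharePoint's verb-tunnelling header for DELETE
    },
)
resp.raise_for_status()
```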
Hope this info helps.

Reading data from QVD using python and databricks

I am new to Python. Can you help me with the details of how a QVD can be read into an Azure Databricks DataFrame using Python?
I need the detailed syntax (authentication with an access key) for accessing the QVD in the data lake and then reading it with qvd_reader.
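A rough sketch of one way this could be done, assuming the PyPI qvd package (which provides qvd_reader and returns a pandas DataFrame) and ADLS Gen2 access via an account key; all names and paths are placeholders:

```python
# Sketch: read a QVD from ADLS Gen2 into a Spark DataFrame.
# Storage account, container, key and path are placeholders.
from qvd import qvd_reader

storage_account = "<storage-account>"
container = "<container>"
access_key = "<access-key>"

# Authenticate to the lake with the account key
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net", access_key
)

# qvd_reader works on local files, so copy the QVD to the driver first
remote_path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/path/file.qvd"
dbutils.fs.cp(remote_path, "file:/tmp/file.qvd")

pdf = qvd_reader.read("/tmp/file.qvd")   # pandas DataFrame
df = spark.createDataFrame(pdf)          # Spark DataFrame
```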

Linked Service with self-hosted integration runtime is not supported in data flow in Azure Data Factory

Steps to reproduce:
1. I created a Copy Data activity first in the pipeline to simply transfer CSV files from an Azure VM to Azure Blob Storage. I always use IRPOC1 as the connection via an integration runtime and connect to my Blob Storage using a SAS URI and SAS token.
2. After validating and running my first Copy Data activity, the CSV files are successfully transferred from the VM to Blob Storage.
3. I tried to add a new Data Flow after the Copy Data activity.
4. In my Data Flow, the source is the Blob Storage containing the CSV files transferred from the VM, and the sink is my Azure SQL Database with a successful connection.
However, when I ran validation, I got the following error message on my Data Flow source:
Linked Service with self-hosted integration runtime is not supported in data flow.
I saw someone reply on a Microsoft Azure documentation issue on GitHub saying that I need to use Copy Data to transfer the data to Blob Storage first and then use that blob as the source for the Data Flow. This is what I did, but I still get the same error. Could you please let me know how I can fix this?
The Data Flow source dataset must use a Linked Service that uses an Azure IR, not a self-hosted IR.
Go to the dataset in your Data Flow source and click "Open". On the dataset page, click "Edit" next to the Linked Service.
In the Linked Service dialog, make sure you are using an Azure Integration Runtime, not a Self-hosted IR.

Connect to Azure SQL Database from Databricks Notebook

I want to load data from Azure Blob Storage into an Azure SQL Database using a Databricks notebook. Could anyone help me with this?
I'm new to this, so I cannot comment, but why use Databricks for this? It would be much easier and cheaper to use Azure Data Factory.
https://learn.microsoft.com/en-us/azure/data-factory/tutorial-copy-data-dot-net
If you really need to use Databricks, you would need to either mount your Blob Storage account, or access it directly from your Databricks notebook or JAR, as described in the documentation (https://docs.azuredatabricks.net/spark/latest/data-sources/azure/azure-storage.html).
You can then read the files into DataFrames for whatever format they are in, and use the SQL JDBC connector to create a connection for writing the data to SQL (https://docs.azuredatabricks.net/spark/latest/data-sources/sql-databases.html).
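A minimal sketch of that approach, assuming direct access with a storage account key and CSV files; the account, container, table and credentials are placeholders:

```python
# Sketch: read CSVs from Blob Storage and write them to Azure SQL over JDBC.
# All account names, paths and credentials are placeholders.
spark.conf.set(
    "fs.azure.account.key.<storage-account>.blob.core.windows.net", "<access-key>"
)

df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("wasbs://<container>@<storage-account>.blob.core.windows.net/path/")
)

(
    df.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<database>")
    .option("dbtable", "dbo.my_table")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("append")
    .save()
)
```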

Copy data from Data Lake Storage to Database present in azure Analysis server using copy activity

Is there any way to copy data from Azure Data Lake Storage to a database in Azure Analysis Services using Azure Data Factory?
I am trying to use a Copy activity to do this, but I don't know how to specify the Analysis Services database as the destination in the output dataset.
For data connectors that are not in the Data Factory support list, you can write a custom activity to access your data.
Here is the doc: https://learn.microsoft.com/en-us/azure/data-factory/data-factory-use-custom-activities
Thanks,
Charles