Azure Data Factory - Batch Accounts - BlobAccessDenied - azure-data-factory

I'm trying to use a custom activity in Data Factory to execute, in a Batch account pool, a Python script stored in Blob Storage.
I followed the Microsoft tutorial https://learn.microsoft.com/en-us/azure/batch/tutorial-run-python-batch-azure-data-factory
My problem is that when I execute the ADF pipeline, the activity fails.
When I check in the Batch Explorer tool, I get this BlobAccessDenied message:
Depending on the execution, it happens on all ADF reference files, but also on my batch file.
I have linked the Storage account to the Batch account.
I'm new to this and I'm not sure what I must do to solve this.
Thank you in advance for your help.

I tried to reproduce the issue and it is working fine for me.
Please check the following points while creating the pipeline.
Check that you have pasted the storage account connection string at line number 6 in the main.py file.
You need to create a Blob Storage and a Batch linked service in Azure Data Factory (ADF). These linked services will be required in the "Azure Batch" and "Settings" tabs when configuring the ADF pipeline. Follow the steps below to create the linked services.
In the ADF portal, click the 'Manage' icon on the left, then click +New to create the Blob Storage linked service.
Search for "Azure Blob Storage" and click Continue.
Fill in the required details for your Storage account, test the connection, and click Apply.
Similarly, search for the Azure Batch linked service (under the Compute tab).
Fill in the details of your Batch account, select the previously created storage linked service under "Storage linked service name", and test the connection. Click Save.
Later, when you create the custom ADF pipeline, provide the Batch linked service name under the "Azure Batch" tab.
Under the "Settings" tab, provide the storage linked service name and the other required information. In "Folder path", provide the blob container where you have the main.py and iris.csv files.
Once this is done, you can Validate, Debug, Publish, and Trigger the pipeline. The pipeline should run successfully.
Once the pipeline has run successfully, you will see the iris_setosa.csv file in your output blob container.
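One quick way to catch the connection-string mistake from the first check is to sanity-check the string before pasting it into main.py. A minimal stdlib-only sketch (the helper name is hypothetical, not part of the tutorial):

```python
def parse_connection_string(conn_str: str) -> dict:
    """Split an Azure Storage connection string into its key/value parts."""
    parts = dict(
        segment.split("=", 1)  # maxsplit=1: AccountKey values end in '=' padding
        for segment in conn_str.split(";")
        if segment
    )
    # A usable connection string needs at least these fields.
    for required in ("AccountName", "AccountKey"):
        if required not in parts:
            raise ValueError(f"connection string is missing {required}")
    return parts
```

If this raises, the string pasted at line 6 of main.py was probably truncated, or is a SAS token rather than a full connection string.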

Related

How to copy blob file to SAS URL in a Synapse pipeline

I have a blob zip file in my storage account, I have a linked service and binary dataset to get the file as the source in a copy activity. There is an outside service I call in a web activity that returns a writable SAS URL to a different storage account in this format.
https://foo.blob.core.windows.net/dmf/43de9fb6-3b96-4f47-b730-eb8de040859dblah.zip?sv=2014-02-14&sr=b&sig=0mgvh25htg45b5u4ty5E%2Bf0ahMwFkHVy3iTC2nh%2FIKw%3D&st=2022-08-13T02%3A19%3A33Z&se=2022-08-13T02%3A54%3A33Z&sp=rw
I tried adding an Azure Blob SAS linked service, added a parameter for the URI on the linked service, then added a dataset bound to it with a URI parameter as well, and I pass the SAS URI dynamically all the way down to the linked service. The copy fails each time with "The remote server returned an error: (403)". I must be doing something wrong but I'm not sure what. I'd appreciate any input, thanks.
I tried to reproduce the same in my environment and got the same error:
To resolve the 403 error, enable access from all networks (Storage account -> Networking) and check whether the Storage Blob Data Contributor role has been assigned. If not, go to the Azure Storage account -> Access control (IAM) -> +Add role assignment and add the Storage Blob Data Contributor role.
Now it's working.
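A 403 on a SAS URL can also simply mean the token has expired (`se`) or lacks write permission (`sp`). A small stdlib sketch to inspect the token before blaming the linked service (the helper is illustrative, not an Azure SDK call):

```python
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timezone

def inspect_sas_url(sas_url: str) -> dict:
    """Pull the interesting fields out of a SAS URL's query string."""
    query = parse_qs(urlparse(sas_url).query)
    info = {
        "permissions": query.get("sp", [""])[0],  # e.g. "rw" = read + write
        "resource": query.get("sr", [""])[0],     # "b" = blob, "c" = container
        "expiry": query.get("se", [""])[0],       # UTC expiry time
    }
    expiry = datetime.fromisoformat(info["expiry"].replace("Z", "+00:00"))
    info["expired"] = expiry < datetime.now(timezone.utc)
    return info
```

Note that the SAS in the question above is valid for only 35 minutes (`st` to `se`), so a pipeline that waits too long between the web activity and the copy activity will also see a 403.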

Azure DevOps service connection bug

I have created an "AWS for Terraform" Azure DevOps service connection to be used for authenticating with AWS. The region I used for the configuration is ap-southeast-2. This is where my S3 bucket containing my terraform state file resides.
After creating the connection, I try to create a release pipeline using a terraform init step.
I select the service connection that I just created from the drop-down menu. When I click on the Bucket drop-down menu, "no results found" is returned. If I change the service connection's region to us-east-1, the drop-down displays a list of buckets (including the one I want). This makes absolutely no sense to me, and I wondered if anyone could explain it.
The issue is that when I select my bucket from the drop-down menu, the step fails when it executes.
There is an apparent "work around" suggested on this page, but it does not work for me. Any advice is appreciated.

error browsing directory under ADLS Gen2 container for Azure Data Factory

I am creating a dataset in Azure Data Factory. This dataset will be a Parquet file within a directory under a certain container in an ADLS Gen2 account. The container name is 'raw', and the directory that I want to place the file into is source/system1/FullLoad. When I click on Browse next to File path, I am able to access the container, but I cannot access the directory. When I hit folder 'source', I get the error shown below.
How can I drill down to the desired directory? As the error message indicates, I suspect it's something to do with permissions to access the data (the Parquet file doesn't exist yet, as it will be used as a sink in a copy activity that hasn't been run), but I don't know how to resolve it.
Thanks for confirming. Posting the resolution here for others in case anyone faces this issue.
The user or managed identity you are using for your data factory should have Storage Blob Data Contributor access on the storage account. You can check this from the Azure portal: go to your storage account, navigate to the container and then the directory, click Access Control (IAM) on the left panel, and check the role assignments. If it is missing, add the Storage Blob Data Contributor role assignment for your managed identity.
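If you prefer to script the role assignment rather than click through the portal, the scope string for a storage account follows a fixed ARM pattern. A sketch (the subscription, resource group, and account names are placeholders; the role ID is the well-known ID for Storage Blob Data Contributor):

```python
# Well-known Azure role definition ID for "Storage Blob Data Contributor".
STORAGE_BLOB_DATA_CONTRIBUTOR = "ba92f5b4-2d11-453d-bab0-d7883bf7090c"

def storage_account_scope(subscription_id: str, resource_group: str, account: str) -> str:
    """Build the ARM scope used when assigning a role on a storage account."""
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Storage/storageAccounts/{account}"
    )
```

The scope can then be passed to `az role assignment create --assignee <principal-id> --role <role> --scope <scope>`.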

How to grant access to Azure File Copy of Azure Pipeline to Azure Storage?

I would like to copy files with Azure File Copy with Azure Pipeline.
I'm following instruction of https://praveenkumarsreeram.com/2021/04/14/azure-devops-copy-files-from-git-repository-to-azure-storage-account/
I'm using automatically created Service Connection named "My Sandbox (a1111e1-d30e-4e02-b047-ef6a5e901111)"
I'm getting this error from the Azure File Copy task:
INFO: Authentication failed, it is either not correct, or
expired, or does not have the correct permission ->
github.com/Azure/azure-storage-blob-go/azblob.newStorageError,
/home/vsts/go/pkg/mod/github.com/!azure/azure-storage-blob-
go#v0.10.1-0.20201022074806-
8d8fc11be726/azblob/zc_storage_error.go:42
RESPONSE Status: 403 This request is not authorized to perform
this operation using this permission.
I'm assuming that the Azure Pipeline has no access to Azure Storage.
I wonder how to find the service principal which should be granted access to Azure Storage.
I can also reproduce your issue on my side. Different Azure File Copy task versions use different versions of AzCopy behind the scenes, so they use different authentication methods to call the API.
There are two ways to fix the issue.
If you use the automatically created service connection, it should have the Contributor role on your storage account; use Azure File Copy task version 3.* instead of 4.* and it will work.
If you want to use Azure File Copy task version 4.*, navigate to your storage account -> Access Control (IAM) and add the service principal used in the service connection to the Storage Blob Data Contributor role (see detailed steps here). This will also work.
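For reference, a YAML sketch of the version-3 task from the first option (the storage account and container names are placeholders; adjust SourcePath to your repo layout):

```yaml
- task: AzureFileCopy@3
  inputs:
    SourcePath: '$(Build.SourcesDirectory)/files'
    azureSubscription: 'My Sandbox (a1111e1-d30e-4e02-b047-ef6a5e901111)'
    Destination: 'AzureBlob'
    storage: 'mystorageaccount'   # placeholder
    ContainerName: 'mycontainer'  # placeholder
```

Switching the task to `AzureFileCopy@4` keeps the same inputs but additionally requires the Storage Blob Data Contributor role from the second option.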

Linked Service with self-hosted integration runtime is not supported in data flow in Azure Data Factory

Step to reproduce:
I first created a Copy Data activity in the pipeline to simply transfer CSV files from an Azure VM to Azure Blob Storage. I always use IRPOC1 as the connection via integration runtime, and I connect to my Blob Storage using a SAS URI and SAS token.
After validating and running my first Copy Data activity, I successfully transferred the CSV files from my VM to Blob Storage.
I then tried to add a new Data Flow after the Copy Data activity.
In my Data Flow, the source is the Blob Storage containing the CSV files transferred from the VM, and the sink is my Azure SQL Database, with a successful connection.
However, when I ran validation, I got the error message on my Data Flow Source:
Linked Service with self-hosted integration runtime is not supported in data flow.
I saw someone reply on a Microsoft Azure documentation GitHub issue that I need to use Copy Data to transfer the data to Blob Storage first, then use this blob as the source of the data flow. This is what I did, but I still get the same error. Could you please let me know how I can fix this?
The Data Flow source dataset must use a Linked Service that uses an Azure IR, not a self-hosted IR.
Go to the dataset in your Data Flow source and click "Open". On the dataset page, click "Edit" next to the linked service.
In the linked service dialog, make sure you are using an Azure integration runtime, not a self-hosted IR.
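The fix can also be seen in the linked service JSON: the `connectVia` reference must point at an Azure IR, or be omitted entirely, which defaults to the AutoResolve Azure IR. A sketch with placeholder names:

```json
{
  "name": "AzureBlobStorageForDataFlow",
  "properties": {
    "type": "AzureBlobStorage",
    "typeProperties": {
      "sasUri": {
        "type": "SecureString",
        "value": "<your SAS URI>"
      }
    },
    "connectVia": {
      "referenceName": "AutoResolveIntegrationRuntime",
      "type": "IntegrationRuntimeReference"
    }
  }
}
```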