ADLS Gen2 operation failed for: An error occurred while sending the request. User error 2011 - azure-data-factory

Hi, I am getting the above error when accessing a storage container folder while trying to get the metadata of a folder and its files. It cannot access the folders for some reason. I have checked the linked service and the storage container: public access is enabled and a private endpoint is also set.
Please let me know what else is missing.

I tried to reproduce the error and got a similar one.
The cause of the error was that I was trying to access an ADLS Gen2 account that is not available or does not exist.
After providing the correct information, I was able to connect to ADLS Gen2 successfully.
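As a quick check outside ADF, you can confirm that the account and folder actually exist using the Python SDK (a minimal sketch assuming the azure-storage-file-datalake package; the account, container, folder, and key values are all placeholders):
'''
# Minimal connectivity check against ADLS Gen2; all names are placeholders.
from azure.storage.filedatalake import DataLakeServiceClient

account_name = "mystorageaccount"  # placeholder
service = DataLakeServiceClient(
    account_url=f"https://{account_name}.dfs.core.windows.net",
    credential="<account-key>",  # placeholder key
)

# If the account or container does not exist or is unreachable, these calls
# raise an error similar to the one ADF surfaces as user error 2011.
fs = service.get_file_system_client("mycontainer")  # placeholder container
for item in fs.get_paths(path="myfolder"):          # placeholder folder
    print(item.name)
'''
If this fails outside ADF as well, the problem lies with the account details or the network path rather than with the pipeline.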

Related

ADF Copy data from Azure Databricks Delta Lake to Azure SQL Server

I'm trying to use the Copy data activity to extract information from an Azure Databricks Delta Lake, but I've noticed that it doesn't pass the information directly from the Delta Lake to the SQL Server I need; it must first stage it in an Azure Blob Storage account. When running it, it throws the following error:
ErrorCode=AzureDatabricksCommandError,Hit an error when running the command in Azure Databricks. Error details: Failure to initialize configurationInvalid configuration value detected for fs.azure.account.key Caused by: Invalid configuration value detected for fs.azure.account.key
Searching for information, I found a possible solution, but it didn't work:
Invalid configuration value detected for fs.azure.account.key copy activity fails
Does anyone have any idea how the hell to pass information from an Azure Databricks Delta Lake table to a table in SQL Server??
(Screenshots were attached showing the ADF pipeline structure, including a message that a Storage Account is required to continue, the configuration, and the failed execution.)
Thank you very much
The solution for this problem was the following: correct the way the Storage Access Key configuration was being defined. The instruction

spark.hadoop.fs.azure.account.key.<storageaccountname>.blob.core.windows.net

must be changed to

spark.hadoop.fs.azure.account.key.<storageaccountname>.dfs.core.windows.net
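The same key can also be set at session level from a notebook instead of in the cluster Spark config (a sketch; the account name and the secret scope/key names are placeholders, and note the dfs endpoint):
'''
# Session-level equivalent of the cluster Spark config line above.
# "storageaccountname" and the secret scope/key names are placeholders.
spark.conf.set(
    "fs.azure.account.key.storageaccountname.dfs.core.windows.net",
    dbutils.secrets.get(scope="my-scope", key="storage-key"),
)
'''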
To achieve the above scenario, follow these steps:
First, go to your Databricks cluster, edit it, and under Advanced options >> Spark >> Spark config add the following if you are using Blob Storage:
spark.hadoop.fs.azure.account.key.<storageaccountname>.blob.core.windows.net <Accesskey>
spark.databricks.delta.optimizeWrite.enabled true
spark.databricks.delta.autoCompact.enabled true
After that, since you are using SQL Database as a sink:
Enable staging and use the same Blob Storage account's linked service as the staging account linked service, giving a storage path from your Blob Storage.
Then debug it. Make sure you complete the prerequisites from the official document.
(Screenshots of the sample input and of the resulting output in SQL were attached.)
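If the staged Copy activity still fails, a fallback is to do the move entirely inside a Databricks notebook: read the Delta table and write it to SQL Server over JDBC (a sketch; the table, server, database, and credential names are all placeholders):
'''
# Read a Delta table and append it to an Azure SQL table over JDBC.
# Every name below is a placeholder; the SQL Server JDBC driver ships
# with the Databricks runtime.
df = spark.read.table("my_delta_table")

(df.write
   .format("jdbc")
   .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb")
   .option("dbtable", "dbo.my_target_table")
   .option("user", "sqladmin")
   .option("password", dbutils.secrets.get(scope="my-scope", key="sql-password"))
   .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
   .mode("append")
   .save())
'''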

Not able to install library on Azure databricks cluster

While validating the pipeline created in Azure Data Factory, I am facing this issue when running the Azure Databricks linked service.
Error details:
Run result unavailable: job failed with error message Library installation failed for library due to user error for jar: "dbfs:/mnt/mopireport/TeamsAnalyticsCore-v9.jar" . Error messages: Library installation attempted on the driver node of cluster 0805-090147-ulpmkivi and failed. Please refer to the following error message to fix the library or contact Databricks support. Error Code: DRIVER_LIBRARY_INSTALLATION_FAILURE. Error Message: java.lang.Throwable: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: This request is not authorized to perform this operation using this resource type.
Background:
The library is at a mounted DBFS URL (the mount, sketched below, was verified to succeed).
Databricks is pointing to the right cluster (verified).
I have permissions to edit and publish data pipeline in ADF.
Alternatives tried:
I tried using abfss instead of wasbs for mounting, but wasbs is the right one: the wasbs mount works, while abfss gives an error.
Going directly to the cluster -> Libraries -> Install new, selecting "DBFS/ADLS" as the library source and JAR as the type, and uploading there. This upload also fails with the same message as above.
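For reference, the wasbs mount being verified here usually looks something like this (a sketch; the container, account, and secret names are placeholders, and the mount point matches the path in the error message above):
'''
# Mount a Blob Storage container via wasbs; container, account, and
# secret scope names are placeholders for this sketch.
dbutils.fs.mount(
    source="wasbs://mycontainer@mystorageaccount.blob.core.windows.net",
    mount_point="/mnt/mopireport",
    extra_configs={
        "fs.azure.account.key.mystorageaccount.blob.core.windows.net":
            dbutils.secrets.get(scope="my-scope", key="storage-key")
    },
)
'''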

(Unauthorized) not authorized error when setting up Data Federation to move data to S3

I am setting up Data Federation and want to move data from a cluster to S3, following the guidance in How to Automate Continuous Data Copying from MongoDB to S3. I have set up my cluster and S3 bucket as data sources in the Data Federation. Next, I created linked data sources to the service name "kaison-data-federation" for the Data Federation. Then, in my trigger, I wrote the pipeline (sketched below).
However, I am hitting an "(Unauthorized) not authorized" error with no exact error details. I have referred to other posts such as Unauth Error on Moving Data from DataLake to S3, but it is still not working. I have been stuck here for 3 days. Can anyone provide some insight? Thank you.
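For context, the tutorial's scheduled trigger ends with a $out stage that writes to the S3 data source; run against the federated database with pymongo, it looks roughly like this (a sketch; the URI, database, collection, and bucket names are placeholders):
'''
# Aggregate from a federated collection and export to S3 via $out.
# The URI and all names below are placeholders for this sketch.
from pymongo import MongoClient

client = MongoClient("mongodb://federated-database-hostname:27017")  # placeholder URI
coll = client["VirtualDatabase0"]["VirtualCollection0"]              # placeholder names

coll.aggregate([
    {"$match": {}},
    {"$out": {
        "s3": {
            "bucket": "my-bucket",  # placeholder bucket
            "region": "us-east-1",
            "filename": "export/",
            "format": {"name": "json", "maxFileSize": "100MiB"},
        }
    }},
])
'''
If the pipeline itself matches the tutorial, an Unauthorized error points at access to one of the named data sources rather than at the pipeline syntax.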

Copy file from Azure File Storage to Blob

I'm using a Copy data activity that copies a file from a File Share to Blob Storage. I always get this error:
'''
ErrorCode=UserErrorFailedToCreateAzureBlobContainer,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Unable to create Azure Blob container. Endpoint: https://xxx.blob.core.windows.net/, Container Name: temp-archive.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=Unable to connect to the remote server,Source=Microsoft.WindowsAzure.Storage,''Type=System.Net.WebException,Message=Unable to connect to the remote server,Source=System,''Type=System.Net.Sockets.SocketException,Message=A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond xx.xxx.xxx.xxx:xxx,Source=System,'
'''
Both the Blob and File Share storage accounts are set up correctly, as I have used them elsewhere in my pipeline. It is only when copying from the File Share to Blob that this happens. Is it a limitation?
Please follow these steps to identify the error:
Test the connection of the linked services for the File Storage and the Blob Storage.
Preview the source data to test whether the connection works well. If it does, the error is caused by the sink side.
Delete and recreate the linked service of the sink dataset.
Check the Blob Storage firewall to confirm that Data Factory can access it (a quick SDK check is sketched below).
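To narrow it down outside Data Factory, you can attempt the exact container creation the error message reports, using the Blob SDK (a sketch; the account URL is taken from the error above and the key is a placeholder):
'''
# Try the container creation that the Copy activity reports as failing.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://xxx.blob.core.windows.net",  # from the error message
    credential="<account-key>",                       # placeholder key
)

# A socket timeout here (rather than a 403) points to a network/firewall
# problem between the integration runtime and the Blob endpoint.
service.create_container("temp-archive")
'''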

Connection to an S3 instance using a service-connector

I'm trying to create a service-connector to my s3 instance like this:
cf service-connector 13001 mybucketname.ds31s3.swisscom.com:443
But I get the following error:
Server-Error 403: Check of security groups failed (no access)
I have created my service key according to this documentation.
Connecting to my MongoDB works perfectly using a service connector.
You can access Swisscom's S3 directly without the service connector.
The error message suggests that your current org and space do not have access to the S3 service. This is usually the case if there is no app binding for that service in the current space. Please check whether you created your service key in the right org and space.
There was a misconfiguration due to security changes. We fixed the issue, so connecting to S3 with the service-connector should now work.