Error while accessing SAP data using Azure Data Factory CDC connector

We are trying to read data from SAP using the Azure Data Factory change data capture (CDC) connector. We get the error below when trying to access the data. The connector works fine for a full load but fails for a delta load.
Error Message: DF-SAPODP-012 - SapOdp copy activity failure with run id: XXXXXXXX-XXXX-4444-826e-XXXXX, error code: 2200 and error message: ErrorCode=SapOdpOperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Sap Odp operation 'OpenOdpRead' failed. Error Number: '013', error message: 'Error while accessing',Source=Microsoft.DataTransfer.Runtime.SapRfcHelper,', Exception: com.microsoft.dataflow.Utils$.failure(Utils.scala:76)
com.microsoft.dataflow.store.sapodp.SapOdpAdmsRequestConstructor$.executeAndMonitorCopyActivity(SapOdpAdmsRequestConstructor.scala:206)
com.microsoft.dataflow.store.sapodp.SapOdpAdmsRequestConstructor$.com$microsoft$dataflow$store$sapodp$SapOdpAdmsRequestConstructor$$executeSapCDCCopyInternal(SapOdpAdmsReque

The issue was due to additional privileges needed for the user to read data through the SAP Operational Data Provisioning (ODP) framework. The full load works because there is no need to track changes. To solve this issue, we added the authorization objects S_DHCDCACT, S_DHCDCCDS, and S_DHCDCSTP to the profile of the user that reads data from SAP.
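If you want to confirm that the roles in the user's profile actually carry these objects before re-running the delta load, one option is a check over RFC. This is a minimal sketch, assuming the SAP NetWeaver RFC SDK and the pyrfc package are installed and the user may call RFC_READ_TABLE; the connection parameters are placeholders:

    from pyrfc import Connection

    # Placeholder connection details for the SAP system being extracted.
    conn = Connection(
        ashost="sap-host.example.com",
        sysnr="00",
        client="100",
        user="EXTRACT_USER",
        passwd="********",
    )

    # AGR_1251 holds the authorization data of roles; list the roles that
    # contain each of the ODP/CDC authorization objects.
    for obj in ("S_DHCDCACT", "S_DHCDCCDS", "S_DHCDCSTP"):
        result = conn.call(
            "RFC_READ_TABLE",
            QUERY_TABLE="AGR_1251",
            OPTIONS=[{"TEXT": f"OBJECT EQ '{obj}'"}],
            FIELDS=[{"FIELDNAME": "AGR_NAME"}],
            ROWCOUNT=20,
        )
        roles = {row["WA"].strip() for row in result["DATA"]}
        print(obj, "->", roles or "not found in any role")

Note that AGR_1251 only shows which roles contain the objects; cross-check AGR_USERS to confirm one of those roles is actually assigned to the extraction user.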

Related

ADLS Gen2 operation failed for: An error occurred while sending the request. User error 2011

Hi, I have the above error coming up when accessing a storage container folder where I am trying to get the metadata of a folder and its files. It can't access the folders for some reason. I checked the linked service and the storage container: public access is enabled and a private endpoint is also set.
Please let me know what else is missing.
I tried to reproduce the error and got a similar one.
The cause of the error was that I was trying to access an ADLS Gen2 account that is not available or does not exist.
After providing the correct account information I was able to connect to ADLS Gen2 successfully. A quick way to validate the account details outside ADF is sketched below.
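For instance, listing the folder with the azure-storage-file-datalake package exercises the same metadata call that ADF makes. This is a minimal sketch, assuming key-based auth; the account, container, and folder names are placeholders:

    from azure.storage.filedatalake import DataLakeServiceClient

    account = "mystorageaccount"  # placeholder: must match the linked service exactly
    service = DataLakeServiceClient(
        account_url=f"https://{account}.dfs.core.windows.net",
        credential="<account-key>",  # or any azure.identity TokenCredential
    )

    # Get Metadata in ADF boils down to a listing call like this one: a wrong
    # account name fails at DNS/404 here, a permission problem fails with 403.
    fs = service.get_file_system_client("mycontainer")  # placeholder container
    for path in fs.get_paths(path="myfolder"):          # placeholder folder
        print(path.name, "(dir)" if path.is_directory else "(file)")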

Not able to install library on Azure Databricks cluster

While validating a pipeline created in Azure Data Factory, I am facing this issue when running the Azure Databricks linked service.
Error details:
Run result unavailable: job failed with error message Library installation failed for library due to user error for jar: "dbfs:/mnt/mopireport/TeamsAnalyticsCore-v9.jar" . Error messages: Library installation attempted on the driver node of cluster 0805-090147-ulpmkivi and failed. Please refer to the following error message to fix the library or contact Databricks support. Error Code: DRIVER_LIBRARY_INSTALLATION_FAILURE. Error Message: java.lang.Throwable: shaded.databricks.org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: This request is not authorized to perform this operation using this resource type.
Background:
The library is at a mounted DBFS path (verified the mount succeeds).
Databricks is pointing to the right cluster (verified).
I have permissions to edit and publish the data pipeline in ADF.
Alternatives tried:
I tried using abfss instead of wasbs for mounting, but wasbs is the right one, as it works while abfss gives an error.
Going directly to the cluster -> Libraries -> Install new -> selecting "DBFS/ADLS" as the library source and JAR as the type and trying the upload: this upload also fails with the same message as above. (A hedged re-mount sketch follows below.)
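For isolation, re-mounting with a plain account key and listing the jar from a notebook separates mount problems from cluster-side authorization problems. This is a minimal sketch; the account, container, and secret scope/key names are placeholders:

    storage_account = "mystorageaccount"  # placeholder
    container = "mopireport"

    # "This request is not authorized ... using this resource type" usually
    # means the credential is valid for the Blob endpoint but not for the way
    # it is being used (e.g. a SAS scoped to the wrong resource type, or
    # storage firewall rules blocking the cluster), so an account-key mount
    # is a useful baseline test. Unmount first if the path is already mounted:
    # dbutils.fs.unmount("/mnt/mopireport")
    dbutils.fs.mount(
        source=f"wasbs://{container}@{storage_account}.blob.core.windows.net/",
        mount_point="/mnt/mopireport",
        extra_configs={
            f"fs.azure.account.key.{storage_account}.blob.core.windows.net":
                dbutils.secrets.get(scope="my-scope", key="storage-key")
        },
    )

    # The driver node must be able to read the jar before it can install it.
    display(dbutils.fs.ls("/mnt/mopireport/"))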

DMS is failing to connect to already working endpoint: Failed in prepare imp for Redshift Base general error

I am using DMS to migrate data from MySQL to Redshift. I need to add a new task, but it does not run because the endpoint connection test fails, even though I am using an already working endpoint. Although a task with this endpoint is running without error, the test fails with this strange error:
Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed in prepare imp for Redshift Base general error.
Restarting the DMS replication instance fixed the issue. A scripted version of the same fix is sketched below.
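For reference, the restart and the retest can be scripted rather than clicked through the console; a minimal boto3 sketch, with the region and ARNs as placeholders:

    import boto3

    dms = boto3.client("dms", region_name="us-east-1")  # placeholder region

    # Reboot the replication instance whose connection test is failing.
    dms.reboot_replication_instance(
        ReplicationInstanceArn="arn:aws:dms:...:rep:EXAMPLE"  # placeholder ARN
    )

    # After the instance is back in the "available" state, re-run the test.
    dms.test_connection(
        ReplicationInstanceArn="arn:aws:dms:...:rep:EXAMPLE",  # placeholder ARN
        EndpointArn="arn:aws:dms:...:endpoint:EXAMPLE",        # placeholder ARN
    )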

GET a Salesforce Batch request using workbench

I am using the following syntax to get a batch request for a bulk data load job performed in our dev org:
https://instance_name-api.salesforce.com/services/async/APIversion/job/jobid/batch/batchId/request
In Workbench I went to the REST Explorer, clicked GET, and used the following query:
/services/async/v29.0/job/7501j000000Lb31/batch/7501g000000l0Lf
When clicking on execute, I get the following error message:
{"exceptionCode":"InvalidSessionId","exceptionMessage":"Unable to find session id"}
My end goal is to be able to pull all of the request CSVs from a bulk data load job instead of having to download each one of them manually.
Thanks
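The InvalidSessionId response is usually because Workbench's REST Explorer sends the session in the standard Authorization header, while the Bulk (async) API expects it in an X-SFDC-Session header; note also that the async path takes the bare version number (29.0, without the leading v). A minimal sketch outside Workbench, with the session id and instance host as placeholders:

    import requests

    session_id = "<session-id>"  # placeholder: e.g. an OAuth access token
    instance = "https://yourInstance.salesforce.com"  # placeholder host
    job_id = "7501j000000Lb31"    # job id from the question
    batch_id = "7501g000000l0Lf"  # batch id from the question

    # The Bulk (async) API reads the session from X-SFDC-Session, not from
    # the Authorization header that Workbench's REST Explorer sends.
    resp = requests.get(
        f"{instance}/services/async/29.0/job/{job_id}/batch/{batch_id}/request",
        headers={"X-SFDC-Session": session_id},
    )
    resp.raise_for_status()
    with open(f"{batch_id}_request.csv", "wb") as f:
        f.write(resp.content)

To pull every CSV instead of one batch at a time, GET /services/async/29.0/job/{job_id}/batch first to list the batch ids, then loop the request above over them.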

WSO2 API Manager clustering Worker-Manager

This is regarding a WSO2 API Manager worker-manager cluster configuration with an external Postgres database. I have used two databases: wso2_carbon for the registry and user management, and wso2_am for storing APIs. The respective XML files have been configured, and the Postgres scripts have been run to create the database tables. When wso2server.sh is run, the console log shows clustering enabled and the members of the domain. However, on the https://: page, when I try to create APIs, it throws an error in the design phase itself.
ERROR - add:jag org.wso2.carbon.apimgt.api.APIManagementException: Error while checking whether context exists
[2016-12-13 04:32:37,737] ERROR - ApiMgtDAO Error while locating API: admin-hello-v.1.2.3 from the database
java.sql.SQLException: org.postgres.Driver cannot be found by jdbc-pool_7.0.34.wso2v2
As per the error message, the driver class name you have given is org.postgres.Driver, which is not correct. It should be org.postgresql.Driver. Double-check the master-datasources.xml config; a hedged excerpt follows.
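For reference, the relevant fragment of repository/conf/datasources/master-datasources.xml would look roughly like this (a hedged excerpt; the URL and credentials are placeholders). The PostgreSQL JDBC jar also has to be present in repository/components/lib so the OSGi runtime can load it:

    <datasource>
        <name>WSO2_CARBON_DB</name>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:postgresql://localhost:5432/wso2_carbon</url>
                <username>wso2user</username>
                <password>********</password>
                <!-- was org.postgres.Driver; the correct class is: -->
                <driverClassName>org.postgresql.Driver</driverClassName>
            </configuration>
        </definition>
    </datasource>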