I have a scenario where I have to check two conditions, and if both are true, execute a set of activities in ADF.
I tried an If Condition activity inside another If Condition, but ADF does not allow it.
So basically my design is: two Lookups to read data, then an If Condition to check condition 1; if that is true, go inside to two more Lookups to read data and another If Condition to check condition 2. But it is not working.
Is there any other workaround for this?
I tried an AND condition inside the If Condition activity, but it is not working. Please suggest.
Since we cannot use an If activity within an If activity, we can evaluate multiple conditions within a single If activity via expressions, using functions such as if, and, and or.
The JSON pipeline below is an example:
{
    "name": "pipeline7",
    "properties": {
        "activities": [
            {
                "name": "If Condition1",
                "type": "IfCondition",
                "dependsOn": [],
                "userProperties": [],
                "typeProperties": {
                    "expression": {
                        "value": "@and(greater(pipeline().parameters.Test2, pipeline().parameters.Test1),greater(pipeline().parameters.Test4, pipeline().parameters.Test3))",
                        "type": "Expression"
                    },
                    "ifFalseActivities": [
                        {
                            "name": "Wait2",
                            "type": "Wait",
                            "dependsOn": [],
                            "userProperties": [],
                            "typeProperties": {
                                "waitTimeInSeconds": 1
                            }
                        }
                    ],
                    "ifTrueActivities": [
                        {
                            "name": "Wait1",
                            "type": "Wait",
                            "dependsOn": [],
                            "userProperties": [],
                            "typeProperties": {
                                "waitTimeInSeconds": 1
                            }
                        }
                    ]
                }
            }
        ],
        "parameters": {
            "Test1": {
                "type": "int",
                "defaultValue": 1
            },
            "Test2": {
                "type": "int",
                "defaultValue": 2
            },
            "Test3": {
                "type": "int",
                "defaultValue": 3
            },
            "Test4": {
                "type": "int",
                "defaultValue": 4
            }
        },
        "annotations": []
    }
}
This approach works if you don't want to create another pipeline and use an Execute Pipeline activity within the If activity for the second comparison.
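If you prefer that second route, here is a minimal sketch of the outer If activity calling a second pipeline; the activity and pipeline names (RunSecondComparison, pl_inner_comparison) are hypothetical:
{
    "name": "If Condition1",
    "type": "IfCondition",
    "typeProperties": {
        "expression": {
            "value": "@greater(pipeline().parameters.Test2, pipeline().parameters.Test1)",
            "type": "Expression"
        },
        "ifTrueActivities": [
            {
                "name": "RunSecondComparison",
                "type": "ExecutePipeline",
                "typeProperties": {
                    "pipeline": {
                        "referenceName": "pl_inner_comparison",
                        "type": "PipelineReference"
                    },
                    "waitOnCompletion": true
                }
            }
        ]
    }
}
The inner pipeline pl_inner_comparison would then hold the second pair of Lookups and its own If activity for condition 2.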
I have a very simple pipeline that I have set up to test tumbling window trigger dependency. The pipeline has a single Wait activity. Here is the pipeline code:-
{
    "name": "pl-something",
    "properties": {
        "activities": [
            {
                "name": "Wait1",
                "type": "Wait",
                "dependsOn": [],
                "userProperties": [],
                "typeProperties": {
                    "waitTimeInSeconds": 25
                }
            }
        ],
        "parameters": {
            "date_id": {
                "type": "string"
            }
        },
        "annotations": []
    },
    "type": "Microsoft.DataFactory/factories/pipelines"
}
I have created the following hourly trigger on it, which simply executes the pipeline at hourly intervals:-
{
    "name": "trg-hourly",
    "properties": {
        "annotations": [],
        "runtimeState": "Started",
        "pipeline": {
            "pipelineReference": {
                "referenceName": "pl-something",
                "type": "PipelineReference"
            },
            "parameters": {
                "date_id": "@formatDateTime(triggerOutputs().windowStartTime, 'yyyyMMddHH')"
            }
        },
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 1,
            "startTime": "2019-11-01T00:00:00.000Z",
            "delay": "00:00:00",
            "maxConcurrency": 1,
            "retryPolicy": {
                "intervalInSeconds": 30
            },
            "dependsOn": []
        }
    }
}
The parameter date_id exists so I know exactly which hourly window a trigger instance is running for. This executes fine. My goal is to create another trigger on the same pipeline which runs daily and depends on the hourly trigger, so that the daily trigger does not run until all 24 hours of a day have been processed. In the screenshot below you can see how I am trying to set up this new trigger as dependent on the hourly trigger (trg-hourly), but the 'OK' button is not activated whenever I try to specify a 24-hour window, and you can see the error saying that the window size is not valid. There is no JSON to show, since it's not even allowing me to create the trigger. What's the issue here?
Maybe it is expecting 1.00:00:00 instead of 0.24:00:00 because there are 24 hours in a day.
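If so, the daily trigger would be another tumbling window trigger with a 24-hour window that declares a dependency on trg-hourly. A rough sketch follows; the name trg-daily is made up and the pipeline reference is carried over from the question, while the key part is the dependsOn entry with a size of 1.00:00:00:
{
    "name": "trg-daily",
    "properties": {
        "annotations": [],
        "runtimeState": "Started",
        "pipeline": {
            "pipelineReference": {
                "referenceName": "pl-something",
                "type": "PipelineReference"
            },
            "parameters": {
                "date_id": "@formatDateTime(triggerOutputs().windowStartTime, 'yyyyMMddHH')"
            }
        },
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 24,
            "startTime": "2019-11-01T00:00:00.000Z",
            "maxConcurrency": 1,
            "dependsOn": [
                {
                    "type": "TumblingWindowTriggerDependencyReference",
                    "size": "1.00:00:00",
                    "referenceTrigger": {
                        "referenceName": "trg-hourly",
                        "type": "TriggerReference"
                    }
                }
            ]
        }
    }
}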
I initially created the following linked service of type AzureDataExplorer in ADFv2, for accessing my database in ADX called CustomerDB:-
{
    "name": "ls_AzureDataExplorer",
    "properties": {
        "type": "AzureDataExplorer",
        "annotations": [],
        "typeProperties": {
            "endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
            "tenant": "xxxxmaskingtenantidxxxx",
            "servicePrincipalId": "xxxxmaskingspxxxx",
            "servicePrincipalKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "ls_AzureKeyVault_MyKeyVault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "MySecret"
            },
            "database": "CustomerDB"
        }
    },
    "type": "Microsoft.DataFactory/factories/linkedservices"
}
This worked smoothly. I had to mask some values for obvious reasons, but the point is that there is no issue with this connection. Now, inspired by this Microsoft documentation, I am trying to create a generic version of this linked service, which makes sense: otherwise, if I have 10 databases in the cluster, I will have to create 10 different linked services.
So I tried to create the parameterized version in the following manner:-
{
    "name": "ls_AzureDataExplorer_Generic",
    "properties": {
        "type": "AzureDataExplorer",
        "annotations": [],
        "typeProperties": {
            "endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
            "tenant": "xxxxmaskingtenantidxxxx",
            "servicePrincipalId": "xxxxmaskingspxxxx",
            "servicePrincipalKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "ls_AzureKeyVault_MyKeyVault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "MySecret"
            },
            "database": "@{linkedService().DBName}"
        }
    },
    "type": "Microsoft.DataFactory/factories/linkedservices"
}
But while publishing the changes I keep getting the following error:-
Is there any solution to this?
The article clearly says that:-
For all other data stores, you can parameterize the linked service by selecting the Code icon on the Connections tab and using the JSON editor
So as per that, my changes should have been published successfully, but I keep getting the error.
It appears I need to declare the parameter elsewhere in the same JSON. The following worked:-
{
    "name": "ls_AzureDataExplorer_Generic",
    "properties": {
        "parameters": {
            "DBName": {
                "type": "string"
            }
        },
        "type": "AzureDataExplorer",
        "annotations": [],
        "typeProperties": {
            "endpoint": "https://mycluster.xxxxmaskingregionxxxx.kusto.windows.net",
            "tenant": "xxxxmaskingtenantidxxxx",
            "servicePrincipalId": "xxxxmaskingspxxxx",
            "servicePrincipalKey": {
                "type": "AzureKeyVaultSecret",
                "store": {
                    "referenceName": "ls_AzureKeyVault_MyKeyVault",
                    "type": "LinkedServiceReference"
                },
                "secretName": "MySecret"
            },
            "database": "@{linkedService().DBName}"
        }
    },
    "type": "Microsoft.DataFactory/factories/linkedservices"
}
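A dataset referencing this generic linked service can then pass the DBName value; a minimal sketch, with hypothetical dataset and table names:
{
    "name": "ds_Customer_Generic",
    "properties": {
        "type": "AzureDataExplorerTable",
        "linkedServiceName": {
            "referenceName": "ls_AzureDataExplorer_Generic",
            "type": "LinkedServiceReference",
            "parameters": {
                "DBName": "CustomerDB"
            }
        },
        "typeProperties": {
            "table": "MyTable"
        }
    },
    "type": "Microsoft.DataFactory/factories/datasets"
}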
I am facing the below issue in creating an Azure Machine Learning Batch Execution activity to execute a scoring ML experiment. Please help:
Please let me know if any other relevant information is needed; I am new to this.
Created an AzureML Linked Service as below:
{
    "name": "PredictionAzureML",
    "properties": {
        "typeProperties": {
            "mlEndpoint": "https://ussouthcentral.services.azureml.net/workspaces/xxxxx/jobs",
            "apiKey": "xxxxxxxx=="
        },
        "type": "AzureML"
    }
}
Created Pipeline as below:
{
    "name": "pipeline1",
    "properties": {
        "description": "use AzureML model",
        "activities": [
            {
                "name": "MLActivity",
                "description": "description",
                "type": "AzureMLBatchExecution",
                "policy": {
                    "timeout": "02:00:00",
                    "retry": 1,
                    "retryIntervalInSeconds": 30
                },
                "typeProperties": {
                    "webServiceInput": "PredictionInputDataset",
                    "webServiceOutputs": {
                        "output1": "PredictionOutputDataset"
                    }
                },
                "inputs": [
                    {
                        "name": "PredictionInputDataset"
                    }
                ],
                "outputs": [
                    {
                        "name": "PredictionOutputDataset"
                    }
                ],
                "linkedServiceName": "PredictionAzureML"
            }
        ]
    }
}
Getting the below error:
{
    "errorCode": "2109",
    "message": "'linkedservicereference' with reference name 'PredictionAzureML' can't be found.",
    "failureType": "UserError",
    "target": "MLActivity"
}
I got this working in Data Factory v2, so apologies if you are using v1.
Try putting the linkedServiceName as an object in the JSON outside of the typeProperties and use the following structure:
"linkedServiceName": {
"referenceName": "PredictionAzureML",
"type": "LinkedServiceReference"
}
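Applied to the activity in the question, the result would look roughly like this (a sketch of just that change, with all other properties left as in the original):
{
    "name": "MLActivity",
    "description": "description",
    "type": "AzureMLBatchExecution",
    "linkedServiceName": {
        "referenceName": "PredictionAzureML",
        "type": "LinkedServiceReference"
    },
    "policy": {
        "timeout": "02:00:00",
        "retry": 1,
        "retryIntervalInSeconds": 30
    },
    "typeProperties": {
        "webServiceInput": "PredictionInputDataset",
        "webServiceOutputs": {
            "output1": "PredictionOutputDataset"
        }
    }
}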
Hope that helps!
Please use "Trigger" instead of "Debug" in the UX. You need to publish your pipeline first, before clicking the "Trigger" button.
Please follow this doc to update your payload. It should look like the following.
{
    "name": "AzureMLExecutionActivityTemplate",
    "description": "description",
    "type": "AzureMLBatchExecution",
    "linkedServiceName": {
        "referenceName": "AzureMLLinkedService",
        "type": "LinkedServiceReference"
    },
    "typeProperties": {
        "webServiceInputs": {
            "<web service input name 1>": {
                "LinkedServiceName": {
                    "referenceName": "AzureStorageLinkedService1",
                    "type": "LinkedServiceReference"
                },
                "FilePath": "path1"
            },
            "<web service input name 2>": {
                "LinkedServiceName": {
                    "referenceName": "AzureStorageLinkedService1",
                    "type": "LinkedServiceReference"
                },
                "FilePath": "path2"
            }
        },
        "webServiceOutputs": {
            "<web service output name 1>": {
                "LinkedServiceName": {
                    "referenceName": "AzureStorageLinkedService2",
                    "type": "LinkedServiceReference"
                },
                "FilePath": "path3"
            },
            "<web service output name 2>": {
                "LinkedServiceName": {
                    "referenceName": "AzureStorageLinkedService2",
                    "type": "LinkedServiceReference"
                },
                "FilePath": "path4"
            }
        },
        "globalParameters": {
            "<Parameter 1 Name>": "<parameter value>",
            "<parameter 2 name>": "<parameter 2 value>"
        }
    }
}
I'm working on an Azure Data Factory V2 pipeline, but I'm having a problem when I try to execute a Custom activity inside an If Condition activity.
If I try to test my pipeline with the "Test Run" button in ADF's web interface, this error appears:
{"code":"BadRequest","message":"Activity PPL_ANYFBRF01 failed: Invalid linked service reference. Name: LNK_BATCH_AZURE","target"...}
I'm sure that there is no error in the linked service reference's name. If I create a Custom activity directly in my pipeline, it works.
I think it could be a syntax error in my activity, but I can't find it.
Here is my If Condition activity's JSON template (the expression "@equal(0,0)" is just for testing purposes):
{
    "name": "IfPointComptageNotExist",
    "type": "IfCondition",
    "dependsOn": [
        {
            "activity": "PointComptage",
            "dependencyConditions": [
                "Succeeded"
            ]
        },
        {
            "activity": "SousPointComptage",
            "dependencyConditions": [
                "Succeeded"
            ]
        }
    ],
    "typeProperties": {
        "expression": {
            "value": "@equal(0,0)",
            "type": "Expression"
        },
        "ifTrueActivities": [
            {
                "type": "Custom",
                "name": "CustomActivityTest",
                "linkedServiceName": {
                    "referenceName": "LNK_BATCH_AZURE",
                    "type": "LinkedServiceReference"
                },
                "typeProperties": {
                    "command": "Batch.exe",
                    "resourceLinkedService": {
                        "referenceName": "LNK_BLOB_STORAGE",
                        "type": "LinkedServiceReference"
                    },
                    "folderPath": "/test/app/"
                }
            }
        ]
    }
},
Thank you in advance for your help.
The problem is now solved. I have recreated the pipeline, and it is working now.
Regards,
Julien.
I need to write a JSON Schema based on the specification defined by http://json-schema.org/, but I'm struggling with the required/mandatory property validation. Below is the JSON schema I have written, in which all 3 properties are mandatory; in my case, however, it should be enough for any one of them to be present. How can I do this?
{
    "id": "http://example.com/searchShops-schema#",
    "$schema": "http://json-schema.org/draft-04/schema#",
    "title": "searchShops Service",
    "description": "",
    "type": "object",
    "properties": {
        "city": {
            "type": "string"
        },
        "address": {
            "type": "string"
        },
        "zipCode": {
            "type": "integer"
        }
    },
    "required": ["city", "address", "zipCode"]
}
If your goal is to say "I want at least one member to exist", then use minProperties:
{
    "type": "object",
    "etc": "etc",
    "minProperties": 1
}
Note also that you can use "dependencies" to great effect if you also want additional constraints to exist when this or that member is present.
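For example, here is a small sketch (the particular rule is invented for illustration) using draft-04 "dependencies" to make zipCode required whenever address is present:
{
    "type": "object",
    "properties": {
        "city": { "type": "string" },
        "address": { "type": "string" },
        "zipCode": { "type": "integer" }
    },
    "dependencies": {
        "address": ["zipCode"]
    }
}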
{
    ...
    "anyOf": [
        { "required": ["city"] },
        { "required": ["address"] },
        { "required": ["zipCode"] }
    ]
}
Or use "oneOf" if exactly one of the properties should be present:
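The shape is the same as above with "oneOf" substituted; an instance containing two of the three properties would then match two subschemas and fail validation:
{
    ...
    "oneOf": [
        { "required": ["city"] },
        { "required": ["address"] },
        { "required": ["zipCode"] }
    ]
}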