I have a requirement where an ADLS container has a folder with multiple files. Assume:
Product.csv
Market.csv
Sales.csv
I need to take the file names Product, Market, and Sales and produce a single file named Product_Market_Sales.csv in a destination path.
I have tried multiple ways to achieve this.
Can anyone help me?
Thanks,
Shanu
If all of your files have the same schema, then the below approach will work for you.
These are my variables in the pipeline.
First, use a Get Metadata activity to get the childItems list from the folder.
It will return an array like the one below, with the file names in alphabetical order.
Then, pass this array to a ForEach activity. Inside the ForEach, split each file name on '.' and append the first element of the split to the names array variable using an Append Variable activity with the below dynamic content.
@split(item().name,'.')[0]
Now, join this array with '_' and store the result in the res_filename string variable using the below dynamic content.
@concat(join(variables('names'),'_'),'.csv')
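For illustration, a rough sketch of how the variables resolve, assuming the names array ends up as Product, Market, Sales (the actual order follows the alphabetical listing from Get Metadata):
names = ["Product","Market","Sales"]
join(variables('names'), '_') = "Product_Market_Sales"
res_filename = "Product_Market_Sales.csv"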
After this, use a Copy activity and give the wildcard path *.csv in the source. Use dataset parameters for the sink dataset, pass the above variable as the file name, and set the Copy behavior to Merge files.
This is my Pipeline JSON:
{
"name": "filenames",
"properties": {
"activities": [
{
"name": "Get Metadata1",
"type": "GetMetadata",
"dependsOn": [],
"policy": {
"timeout": "0.12:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"dataset": {
"referenceName": "Sourcefiles",
"type": "DatasetReference"
},
"fieldList": [
"childItems"
],
"storeSettings": {
"type": "AzureBlobFSReadSettings",
"enablePartitionDiscovery": false
},
"formatSettings": {
"type": "DelimitedTextReadSettings"
}
}
},
{
"name": "ForEach1",
"type": "ForEach",
"dependsOn": [
{
"activity": "Get Metadata1",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"items": {
"value": "#activity('Get Metadata1').output.childItems",
"type": "Expression"
},
"isSequential": true,
"activities": [
{
"name": "Append variable1",
"type": "AppendVariable",
"dependsOn": [],
"userProperties": [],
"typeProperties": {
"variableName": "names",
"value": {
"value": "#split(item().name,'.')[0]",
"type": "Expression"
}
}
}
]
}
},
{
"name": "Set variable1",
"type": "SetVariable",
"dependsOn": [
{
"activity": "ForEach1",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"variableName": "res_filename",
"value": {
"value": "#concat(join(variables('names'),'_'),'.csv')",
"type": "Expression"
}
}
},
{
"name": "Copy data1",
"type": "Copy",
"dependsOn": [
{
"activity": "Set variable1",
"dependencyConditions": [
"Succeeded"
]
}
],
"policy": {
"timeout": "0.12:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "DelimitedTextSource",
"storeSettings": {
"type": "AzureBlobFSReadSettings",
"recursive": true,
"wildcardFileName": "*.csv",
"enablePartitionDiscovery": false
},
"formatSettings": {
"type": "DelimitedTextReadSettings"
}
},
"sink": {
"type": "DelimitedTextSink",
"storeSettings": {
"type": "AzureBlobFSWriteSettings",
"copyBehavior": "MergeFiles"
},
"formatSettings": {
"type": "DelimitedTextWriteSettings",
"quoteAllText": true,
"fileExtension": ".txt"
}
},
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"typeConversion": true,
"typeConversionSettings": {
"allowDataTruncation": true,
"treatBooleanAsNumber": false
}
}
},
"inputs": [
{
"referenceName": "Sourcefiles",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "target",
"type": "DatasetReference",
"parameters": {
"targetfilename": {
"value": "#variables('res_filename')",
"type": "Expression"
}
}
}
]
}
],
"variables": {
"names": {
"type": "Array"
},
"res_filename": {
"type": "String"
}
},
"annotations": []
}
}
Result file:
If you want to merge files that have different source schemas, then as far as I know it is better to do that with code, as suggested in the comments, using Databricks or Azure Functions.
Below is the output of the Get Metadata activity, which contains name and type values for the child items:
Is it possible to get just the name values and store them in an array variable without using any iteration?
Output = [csv1.csv, csv2.csv, csv3.csv, csv4.csv]
This can be achieved via ForEach and Append Variable, but we don't want to use iterations.
APPROACH 1:
Using ForEach would be the easier way to do this job. However, you can use string manipulation in the following way to get the desired result.
Store the output of the Get Metadata childItems in a string variable (tp):
@string(activity('Get Metadata1').output.childItems)
Now replace all the unnecessary data with empty string '' using the following dynamic content:
@replace(replace(replace(replace(replace(replace(replace(replace(variables('tp'),'[{',''),'}]',''),'{',''),'}',''),'"type":"File"',''),'"',''),'name:',''),',,',',')
Now, drop the trailing comma and split the resulting string (ans) on ',':
@split(substring(variables('ans'),0,sub(length(variables('ans')),1)),',')
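For illustration, assuming the four files from the expected output, the intermediate values would look roughly like this:
tp (stringified childItems):
[{"name":"csv1.csv","type":"File"},{"name":"csv2.csv","type":"File"},{"name":"csv3.csv","type":"File"},{"name":"csv4.csv","type":"File"}]
ans (after the nested replace() calls):
csv1.csv,csv2.csv,csv3.csv,csv4.csv,
final array (after dropping the trailing comma and splitting on ','):
["csv1.csv","csv2.csv","csv3.csv","csv4.csv"]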
APPROACH 2:
Let's say your source has a combination of folders and files, and you want an array containing only the names of objects whose type is File. Then you can use the following approach. Here there is no need for a ForEach, but you will have to use a Copy data activity and a data flow.
Create a Copy data activity that uses a sample file with data like below:
Now create an additional column my_json whose value is the following dynamic content:
@replace(string(activity('Get Metadata1').output.childItems),'"',pipeline().parameters.single_quote)
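For example (the file and folder names here are only illustrative), the my_json column would then hold a single-quoted JSON string such as:
[{'name':'csv1.csv','type':'File'},{'name':'csv2.csv','type':'File'},{'name':'folder1','type':'Folder'}]
The double quotes are swapped for single quotes so the value survives the delimited-text sink and can be parsed by the data flow's singleQuoted JSON source.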
The following is the sink dataset configuration that I have taken:
In the mapping, select just this newly created column and remove the other (demo) column.
Once this copy data executes, the file generated will be as shown below:
In the data flow, take the above file as the source, with the settings shown in the image below:
The data would be read as shown below:
Now, use an Aggregate transformation to group by the type column and apply collect() on the name column.
The result would be as shown below:
Now, use a Conditional Split to separate the File-type rows from the Folder-type rows with the condition type == 'File'.
Now write the fileType stream to a cache sink. The data would look like this:
Back in the pipeline, use the following dynamic content to get the required array:
@activity('Data flow1').output.runStatus.output.sink1.value[0].array_of_types
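This works because the cache sink surfaces its rows under runStatus in the activity output; assuming a single File group, the relevant fragment looks roughly like this (a sketch, not the full output):
"output": {
  "runStatus": {
    "output": {
      "sink1": {
        "value": [
          { "type": "File", "array_of_types": ["csv1.csv", "csv2.csv", "csv3.csv", "csv4.csv"] }
        ]
      }
    }
  }
}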
Pipeline JSON for reference:
{
"name": "pipeline3",
"properties": {
"activities": [
{
"name": "Get Metadata1",
"type": "GetMetadata",
"dependsOn": [],
"policy": {
"timeout": "0.12:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"dataset": {
"referenceName": "source1",
"type": "DatasetReference"
},
"fieldList": [
"childItems"
],
"storeSettings": {
"type": "AzureBlobFSReadSettings",
"recursive": true,
"enablePartitionDiscovery": false
},
"formatSettings": {
"type": "DelimitedTextReadSettings"
}
}
},
{
"name": "Copy data1",
"type": "Copy",
"dependsOn": [
{
"activity": "Get Metadata1",
"dependencyConditions": [
"Succeeded"
]
}
],
"policy": {
"timeout": "0.12:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "DelimitedTextSource",
"additionalColumns": [
{
"name": "my_json",
"value": {
"value": "#replace(string(activity('Get Metadata1').output.childItems),'\"',pipeline().parameters.single_quote)",
"type": "Expression"
}
}
],
"storeSettings": {
"type": "AzureBlobFSReadSettings",
"recursive": true,
"enablePartitionDiscovery": false
},
"formatSettings": {
"type": "DelimitedTextReadSettings"
}
},
"sink": {
"type": "DelimitedTextSink",
"storeSettings": {
"type": "AzureBlobFSWriteSettings"
},
"formatSettings": {
"type": "DelimitedTextWriteSettings",
"quoteAllText": true,
"fileExtension": ".txt"
}
},
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"mappings": [
{
"source": {
"name": "my_json",
"type": "String"
},
"sink": {
"type": "String",
"physicalType": "String",
"ordinal": 1
}
}
],
"typeConversion": true,
"typeConversionSettings": {
"allowDataTruncation": true,
"treatBooleanAsNumber": false
}
}
},
"inputs": [
{
"referenceName": "csv1",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "sink1",
"type": "DatasetReference"
}
]
},
{
"name": "Data flow1",
"type": "ExecuteDataFlow",
"dependsOn": [
{
"activity": "Copy data1",
"dependencyConditions": [
"Succeeded"
]
}
],
"policy": {
"timeout": "0.12:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"dataflow": {
"referenceName": "dataflow2",
"type": "DataFlowReference"
},
"compute": {
"coreCount": 8,
"computeType": "General"
},
"traceLevel": "None"
}
},
{
"name": "Set variable2",
"type": "SetVariable",
"dependsOn": [
{
"activity": "Data flow1",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"variableName": "req",
"value": {
"value": "#activity('Data flow1').output.runStatus.output.sink1.value[0].array_of_types",
"type": "Expression"
}
}
}
],
"parameters": {
"single_quote": {
"type": "string",
"defaultValue": "'"
}
},
"variables": {
"req": {
"type": "Array"
},
"tp": {
"type": "String"
},
"ans": {
"type": "String"
},
"req_array": {
"type": "Array"
}
},
"annotations": [],
"lastPublishTime": "2023-02-03T06:09:07Z"
},
"type": "Microsoft.DataFactory/factories/pipelines"
}
Dataflow JSON for reference:
{
"name": "dataflow2",
"properties": {
"type": "MappingDataFlow",
"typeProperties": {
"sources": [
{
"dataset": {
"referenceName": "Json3",
"type": "DatasetReference"
},
"name": "source1"
}
],
"sinks": [
{
"name": "sink1"
}
],
"transformations": [
{
"name": "aggregate1"
},
{
"name": "split1"
}
],
"scriptLines": [
"source(output(",
" name as string,",
" type as string",
" ),",
" allowSchemaDrift: true,",
" validateSchema: false,",
" ignoreNoFilesFound: false,",
" documentForm: 'arrayOfDocuments',",
" singleQuoted: true) ~> source1",
"source1 aggregate(groupBy(type),",
" array_of_types = collect(name)) ~> aggregate1",
"aggregate1 split(type == 'File',",
" disjoint: false) ~> split1#(fileType, folderType)",
"split1#fileType sink(validateSchema: false,",
" skipDuplicateMapInputs: true,",
" skipDuplicateMapOutputs: true,",
" store: 'cache',",
" format: 'inline',",
" output: true,",
" saveOrder: 1) ~> sink1"
]
}
}
}
I have a pipeline that will iterate over files and copy them to a storage location.
The baseURL and relativeURL are stored in a json file.
I can read in this file and it is valid.
I have parameterized the linked service baseURL and this works when testing from the linked service, and from the dataset.
When I try to debug the pipeline however, I get an error:
"code":"BadRequest"
"message":null
"target":"pipeline//runid/310b8ac1-2ce6-4c7c-a1ad-433ee9019e9b"
"details":null
"error":null
It appears that from the activity in the pipeline, a null value is being passed instead of the baseURL.
I have iterated over the values from my config file; it is being read and the values are correct. It really seems like the pipeline is not passing the correct value for baseURL.
Do I have to modify the json code behind the pipeline to get this to work?
If it helps, the JSON for the linked service, dataset, and pipeline is below:
--Linked Service:
{
"name": "ls_http_opendata_ecdc_europe_eu",
"properties": {
"parameters": {
"baseURL": {
"type": "string"
}
},
"annotations": [],
"type": "HttpServer",
"typeProperties": {
"url": "#linkedService().baseURL",
"enableServerCertificateValidation": true,
"authenticationType": "Anonymous"
}
}
}
--Dataset:
{
"name": "ds_ecdc_raw_csv_http",
"properties": {
"linkedServiceName": {
"referenceName": "ls_http_opendata_ecdc_europe_eu",
"type": "LinkedServiceReference"
},
"parameters": {
"relativeURL": {
"type": "string"
},
"baseURL": {
"type": "string"
}
},
"annotations": [],
"type": "DelimitedText",
"typeProperties": {
"location": {
"type": "HttpServerLocation",
"relativeUrl": {
"value": "#dataset().relativeURL",
"type": "Expression"
}
},
"columnDelimiter": ",",
"escapeChar": "\\",
"firstRowAsHeader": true,
"quoteChar": "\""
},
"schema": []
}
}
--Pipeline:
{
"name": "pl_ingest_ecdc_data",
"properties": {
"activities": [
{
"name": "lookup ecdc filelist",
"type": "Lookup",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "JsonSource",
"storeSettings": {
"type": "AzureBlobStorageReadSettings",
"recursive": true,
"enablePartitionDiscovery": false
},
"formatSettings": {
"type": "JsonReadSettings"
}
},
"dataset": {
"referenceName": "ds_ecdc_file_list",
"type": "DatasetReference"
},
"firstRowOnly": false
}
},
{
"name": "execute copy for every record",
"type": "ForEach",
"dependsOn": [
{
"activity": "lookup ecdc filelist",
"dependencyConditions": [
"Succeeded"
]
}
],
"userProperties": [],
"typeProperties": {
"items": {
"value": "#activity('lookup ecdc filelist').output.value",
"type": "Expression"
},
"activities": [
{
"name": "Copy data1",
"type": "Copy",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "DelimitedTextSource",
"storeSettings": {
"type": "HttpReadSettings",
"requestMethod": "GET"
},
"formatSettings": {
"type": "DelimitedTextReadSettings"
}
},
"sink": {
"type": "DelimitedTextSink",
"storeSettings": {
"type": "AzureBlobFSWriteSettings"
},
"formatSettings": {
"type": "DelimitedTextWriteSettings",
"quoteAllText": true,
"fileExtension": ".txt"
}
},
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"typeConversion": true,
"typeConversionSettings": {
"allowDataTruncation": true,
"treatBooleanAsNumber": false
}
}
},
"inputs": [
{
"referenceName": "DelimitedText1",
"type": "DatasetReference",
"parameters": {
"sourceBaseURL": {
"value": "#item().sourceBaseURL",
"type": "Expression"
},
"sourceRelativeURL": {
"value": "#item().sourceRelativeURL",
"type": "Expression"
}
}
}
],
"outputs": [
{
"referenceName": "ds_ecdc_raw_csv_dl",
"type": "DatasetReference",
"parameters": {
"fileName": {
"value": "#item().sinkFileName",
"type": "Expression"
}
}
}
]
}
]
}
}
],
"concurrency": 1,
"annotations": []
}
}
I reproduced your error.
{"code":"BadRequest","message":null,"target":"pipeline//runid/abd35329-3625-490b-85cf-f6d0de3dac86","details":null,"error":null}
It is because you didn't pass your baseURL to the linked service in the source dataset. Please pass the dataset's baseURL parameter through to the linked service's baseURL parameter.
And the dataset JSON code should look like this:
{
"name": "ds_ecdc_raw_csv_http",
"properties": {
"linkedServiceName": {
"referenceName": "ls_http_opendata_ecdc_europe_eu",
"type": "LinkedServiceReference",
"parameters": {
"baseURL": {
"value": "#dataset().baseURL",
"type": "Expression"
}
}
},
"parameters": {
"relativeURL": {
"type": "string"
},
"baseURL": {
"type": "string"
}
},
"annotations": [],
"type": "DelimitedText",
"typeProperties": {
"location": {
"type": "HttpServerLocation",
"relativeUrl": {
"value": "#dataset().relativeURL",
"type": "Expression"
}
},
"columnDelimiter": ",",
"escapeChar": "\\",
"firstRowAsHeader": true,
"quoteChar": "\""
},
"schema": []
}
}
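With the dataset wired up like this, the Copy activity only needs to supply the dataset parameters. A minimal sketch, assuming the source input is switched to this dataset and the lookup items expose sourceBaseURL and sourceRelativeURL as in the question's pipeline:
"inputs": [
  {
    "referenceName": "ds_ecdc_raw_csv_http",
    "type": "DatasetReference",
    "parameters": {
      "baseURL": {
        "value": "@item().sourceBaseURL",
        "type": "Expression"
      },
      "relativeURL": {
        "value": "@item().sourceRelativeURL",
        "type": "Expression"
      }
    }
  }
]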
I tried every combination of data types for my data, but each time my Data Factory pipeline gives me this error:
{
"errorCode": "2200",
"message": "ErrorCode=UserErrorColumnNameNotAllowNull,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Empty or Null string found in Column Name 2. Please make sure column name not null and try again.,Source=Microsoft.DataTransfer.Common,'",
"failureType": "UserError",
"target": "xxx",
"details": []
}
My Copy data source code is something like this:
{
"name": "xxx",
"description": "uuu",
"type": "Copy",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "DelimitedTextSource",
"storeSettings": {
"type": "AzureBlobStorageReadSettings",
"recursive": true,
"wildcardFileName": "*"
},
"formatSettings": {
"type": "DelimitedTextReadSettings"
}
},
"sink": {
"type": "AzureSqlSink"
},
"enableStaging": false,
"translator": {
"type": "TabularTranslator",
"mappings": [
{
"source": {
"name": "populationId",
"type": "Guid"
},
"sink": {
"name": "PopulationID",
"type": "String"
}
},
{
"source": {
"name": "inputTime",
"type": "DateTime"
},
"sink": {
"name": "inputTime",
"type": "DateTime"
}
},
{
"source": {
"name": "inputCount",
"type": "Decimal"
},
"sink": {
"name": "inputCount",
"type": "Decimal"
}
},
{
"source": {
"name": "inputBiomass",
"type": "Decimal"
},
"sink": {
"name": "inputBiomass",
"type": "Decimal"
}
},
{
"source": {
"name": "inputNumber",
"type": "Decimal"
},
"sink": {
"name": "inputNumber",
"type": "Decimal"
}
},
{
"source": {
"name": "utcOffset",
"type": "String"
},
"sink": {
"name": "utcOffset",
"type": "Int32"
}
},
{
"source": {
"name": "fishGroupName",
"type": "String"
},
"sink": {
"name": "fishgroupname",
"type": "String"
}
},
{
"source": {
"name": "yearClass",
"type": "String"
},
"sink": {
"name": "yearclass",
"type": "String"
}
}
]
}
},
"inputs": [
{
"referenceName": "DelimitedTextFTDimensions",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "AzureSqlTable1",
"type": "DatasetReference"
}
]
}
Can anyone please help me understand the issue? I see in some blogs they suggest using treatnullasempty, but I am not allowed to modify the JSON. Is there a way to do that?
I suggest using a Data Flow Derived Column transformation; Derived Column can help you build an expression to replace the null column.
For example:
Derived Column: if Column_2 is null, return 'dd':
iifNull(Column_2,'dd')
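For reference, in data flow script form this would look roughly like the following (the stream and column names are placeholders, not from your pipeline):
source1 derive(Column_2 = iifNull(Column_2, 'dd')) ~> DerivedColumn1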
Mapping the column
Reference: Data transformation expressions in mapping data flow
Hope this helps.
Fixed it. It was an easy fix: one of the columns in my destination was marked as NOT NULL; I changed it to allow nulls and it worked.
As mentioned in the link below, I am triggering a Lookup first. It gives me email IDs, and then for each email ID I am invoking a POST request.
Iterating Through azure SQL table in Azure Data Factory
I have specified @pipeline().parameters.tableList in the items setting of the ForEach. And after the ForEach, I have set up an email notification to check the output of @pipeline().parameters.leadList. I am getting it correctly, so far so good.
But when I am using @item(), it is returning null.
I am confused: why is @item() giving me null even though @pipeline().parameters.leadList in the child pipeline gives me the correct value?
And I followed this approach: https://learn.microsoft.com/en-us/azure/data-factory/tutorial-bulk-copy-portal
Parent pipeline
{
"name": "LookupToSF",
"properties": {
"activities": [
{
"name": "LookupToSF",
"description": "Retrieve the Lead name and email ids from the Lead table of the Salesforce",
"type": "Lookup",
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false
},
"typeProperties": {
"source": {
"type": "SalesforceSource",
"query": "select name, email from lead where email='abcd#xxxx.com'",
"readBehavior": "query"
},
"dataset": {
"referenceName": "SalesforceObjectToAddPersonAskNicely",
"type": "DatasetReference"
},
"firstRowOnly": false
}
},
{
"name": "TriggerForEachLead",
"type": "ExecutePipeline",
"dependsOn": [
{
"activity": "LookupToSF",
"dependencyConditions": [
"Succeeded"
]
}
],
"typeProperties": {
"pipeline": {
"referenceName": "SendSurvey",
"type": "PipelineReference"
},
"waitOnCompletion": true,
"parameters": {
"leadList": {
"value": "#activity('LookupToSF').output.value",
"type": "Expression"
}
}
}
}
]
}
}
Child pipeline
{
"name": "SendSurvey",
"properties": {
"activities": [
{
"name": "ForEachLead",
"type": "ForEach",
"typeProperties": {
"items": {
"value": "#pipeline().parameters.leadList",
"type": "Expression"
},
"isSequential": true,
"activities": [
{
"name": "WebActivityToAddPerson",
"type": "WebActivity",
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false
},
"typeProperties": {
"url": "https://xxxx.asknice.ly/api/v1/person/trigger",
"method": "POST",
"headers": {
"X-apikey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"data": {
"email": "#{item().Email}",
"addperson": "true"
}
}
},
{
"name": "WebActivityForErrorAddingPerson",
"type": "WebActivity",
"dependsOn": [
{
"activity": "WebActivityToAddPerson",
"dependencyConditions": [
"Failed"
]
}
],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false
},
"typeProperties": {
"url": "https://prod-07.australiaeast.logic.azure.com:443/workflows/xxxxxxxxxxxxx",
"method": "POST",
"headers": {
"Content-Type": "application/json"
},
"body": {
"Status": "Failure",
"message": "#{activity('WebActivityToAddPerson').output}",
"subject": "Failure in adding"
}
}
},
{
"name": "WebActivityToSendSurvey",
"type": "WebActivity",
"dependsOn": [
{
"activity": "WebActivityToAddPerson",
"dependencyConditions": [
"Completed"
]
}
],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false
},
"typeProperties": {
"url": "https://xxxxxxxx.asknice.ly/api/v1/person/trigger",
"method": "POST",
"headers": {
"X-apikey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
},
"data": "Email=#{item().Email}&triggeremail=true "
}
},
{
"name": "WebActivityForErrorSendingSurvey",
"type": "WebActivity",
"dependsOn": [
{
"activity": "WebActivityToSendSurvey",
"dependencyConditions": [
"Failed"
]
}
],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false
},
"typeProperties": {
"url": "https://prod-07.australiaeast.logic.azure.com:443/workflows/xxxxxxxxxxxxxxxxx",
"method": "POST",
"headers": {
"Content-Type": "application/json"
},
"body": {
"Status": "Failure",
"message": "#{activity('WebActivityToAddPerson').output}",
"subject": "Failure in adding"
}
}
},
{
"name": "UserAddingNotification",
"type": "WebActivity",
"dependsOn": [
{
"activity": "WebActivityToAddPerson",
"dependencyConditions": [
"Succeeded"
]
}
],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false
},
"typeProperties": {
"url": "https://prod-07.australiaeast.logic.azure.com:443/workflows/xxxxxxxxxxxxxxxxxxxxx",
"method": "POST",
"headers": {
"Content-Type": "application/json"
},
"body": {
"Status": "Success",
"message": "#{activity('WebActivityToAddPerson').output}",
"subject": "User Added/Updated successfully"
}
}
}
]
}
},
{
"name": "SuccessSurveySentNotification",
"type": "WebActivity",
"dependsOn": [
{
"activity": "ForEachLead",
"dependencyConditions": [
"Succeeded"
]
}
],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false
},
"typeProperties": {
"url": "https://prod-07.australiaeast.logic.azure.com:443/workflows/xxxxxxxxxxxxxxxxxxxxxx",
"method": "POST",
"headers": {
"Content-Type": "application/json"
},
"body": {
"Status": "Success",
"message": "#{item()}",
"subject": "Survey sent successfully"
}
}
}
],
"parameters": {
"leadList": {
"type": "Object"
}
}
}
}
So I found the answer. item() was giving me null because the ForEach parameter was not parsing correctly, hence there was nothing in item().
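If anyone hits the same symptom, it is worth confirming that the ForEach items expression actually resolves to an array. As a hedged sketch, assuming leadList arrives serialized as a string rather than as an array (you can verify with a temporary Set Variable on @string(pipeline().parameters.leadList)), forcing it back into JSON in the items setting would look like this:
"items": {
  "value": "@json(string(pipeline().parameters.leadList))",
  "type": "Expression"
}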