I have an ADF pipeline, and I want to send the output of an activity as an email attachment through a Logic App.
I have a Lookup activity followed by a ForEach activity, and inside the ForEach activity I have a Web activity that calls the Logic App.
I want to send the output of the Lookup activity to the Logic App so it can be emailed as an attachment. I cannot figure out this integration part.
Create a Logic App with an HTTP request trigger and an Outlook action.
Inside the "When an HTTP request is received" trigger:
Copy the HTTP POST URL
Add this Request Body JSON Schema:
{ "properties": { "dataFactoryName": { "type": "string" }, "message": { "type": "string" }, "pipelineName": { "type": "string" }, "receiver": { "type": "string" } }, "type": "object" }
Set the method to POST
Add a "Send an email" action
Connect your Outlook email account.
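In the "Send an email" action, the trigger body properties can then be mapped roughly like this (a sketch only, assuming the schema above; adjust the fields to your needs):
{
    "To": "@{triggerBody()?['receiver']}",
    "Subject": "ADF pipeline @{triggerBody()?['pipelineName']} notification",
    "Body": "<p>@{triggerBody()?['message']} (data factory: @{triggerBody()?['dataFactoryName']})</p>"
}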
In the ADF pipeline's Web activity, use the HTTP POST URL from step 1.
Create a pipeline parameter named receiver.
Add this dynamic content as the Web activity body:
{
"message" : "This is the row from lookup item #{item().customerID},#{item().gender},#{item().SeniorCitizen},#{item().Partner}.",
"dataFactoryName" : "#{pipeline().DataFactory}",
"pipelineName" : "#{pipeline().Pipeline}",
"receiver" : "#{pipeline().parameters.receiver}"
}
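For reference, inside the ForEach the Web activity definition in the pipeline JSON would look roughly like the sketch below; the URL is the HTTP POST URL copied from the Logic App trigger, and the body is the same dynamic content as above:
{
    "name": "Call Logic App",
    "type": "WebActivity",
    "typeProperties": {
        "url": "<HTTP POST URL from the Logic App trigger>",
        "method": "POST",
        "headers": {
            "Content-Type": "application/json"
        },
        "body": {
            "message": "This is the row from lookup item @{item().customerID},@{item().gender},@{item().SeniorCitizen},@{item().Partner}.",
            "dataFactoryName": "@{pipeline().DataFactory}",
            "pipelineName": "@{pipeline().Pipeline}",
            "receiver": "@{pipeline().parameters.receiver}"
        }
    }
}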
The pipeline executed successfully and produced the output:
There is no direct or easy way to send an email attachment from ADF.
As a workaround, you will first have to save the output of your Lookup activity to a file, and then follow the approach described in this video by a community volunteer, where Logic Apps come into play to send the Lookup activity output file as an attachment: How To Send File as Attachment From Azure Data Factory - Azure Data Factory Tutorial 2021
To save the Lookup output data to a file, you can follow this approach: Get Output of lookup activity in a file
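A variant of that workaround is to pass the file content inline instead of having the Logic App read it from storage: the request body carries the file name and its base64-encoded content, and the "Send an email" action maps those to the attachment name and attachment content. The field names and the Lookup activity name below are illustrative only:
{
    "receiver": "@{pipeline().parameters.receiver}",
    "message": "Lookup output attached.",
    "fileName": "lookup-output.json",
    "fileContent": "@{base64(string(activity('Lookup1').output.value))}"
}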
I want to publish data to a Service Bus from my Storage Account.
I already tried sending a simple body and it works fine, but I don't know how I should set up the dataset.
Web Activity Setting
When I run this activity in a pipeline, it sends
{
"myMessage": "Sample",
"datasets": [{
"name": "MyDataset",
"properties": {
...
}
}],
"linkedServices": [{
"name": "MyStorageLinkedService1",
"properties": {
...
}
}]
}
but I want to send the data from the file in the dataset. Does anyone know how I should set up the Web activity?
You can achieve that by using "Copy Activity".
Here is a quick demo that I made:
I used the JSONPlaceholder API; I want to modify the array and add a custom value by doing a PUT request.
Check it out here: https://jsonplaceholder.typicode.com/guide/
Please read the "Updating a resource" section carefully.
Here is the JSON that I want to modify; I added it as a Dataset in ADF.
The main idea is to set the Dataset as the source and a REST API method as the sink, so we are sending the Dataset as the input to the PUT request in the Copy activity.
Copy activity:
Source:
Sink:
You can read more about it here:
https://learn.microsoft.com/en-us/azure/data-factory/connector-rest?tabs=data-factory#dataset-properties
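For reference, the sink side of the Copy activity in the pipeline JSON might look roughly like this (property values are illustrative; see the linked documentation for the full property list):
"sink": {
    "type": "RestSink",
    "requestMethod": "PUT",
    "httpRequestTimeout": "00:01:40",
    "requestInterval": 10,
    "httpCompressionType": "none"
}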
Here is the output of the Copy Activity:
I have an Azure Data Factory pipeline and I would like to send a notification to Slack at the end of the pipeline. The notification body is built from database content.
What is the best practice for integrating Azure Data Factory and Slack?
A) ADF Webhook (to Slack) -> Slack
B) ADF Web -> Logic Apps Web+Webhook -> Slack
Below is one way that worked for me.
First, I created two variables:
ListOfFiles - Array
strListOfFiles - String
Here is the pipeline that I'm using:
ForEach loop Activities:
In the ForEach settings, I set Items to read the child items, i.e.:
@activity('Get List of Files').output.childItems
Then, in the Set variable activity, I store the ListOfFiles array as a string inside strListOfFiles.
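The Set variable value would presumably be an expression along these lines, converting the array to a string:
@string(variables('ListOfFiles'))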
And then, in the Web activity, I use my Logic App URL with the POST method, with { "ListOfFiles": @{variables('strListOfFiles')} } inside the body.
Logic App workflow
I'm using the below JSON schema inside the HTTP request trigger:
{
"properties": {
"ListOfFiles": {
"type": "array"
}
},
"type": "object"
}
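From there, the Logic App can forward the received list to a Slack incoming webhook with a plain HTTP action. In code view that action might look roughly like this; the webhook URL is a placeholder:
"Post_to_Slack": {
    "type": "Http",
    "inputs": {
        "method": "POST",
        "uri": "https://hooks.slack.com/services/<your-webhook-path>",
        "body": {
            "text": "ADF pipeline finished. Files: @{string(triggerBody()?['ListOfFiles'])}"
        }
    }
}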
We have a site where our agents enter in some data, and then that data is sent to a client, via a SendGrid dynamic template.
The email content includes a lot of calculations based on the data entered, so we want our agents to have the ability to preview the email and verify the content first before sending it to the client.
Is there a way to use the SendGrid API to send a request with our JSON object, but instead of sending the email to the client, receive the generated email body so that we can display it to the agent and let them review it first?
Answered my own question. API v3 has GET methods for Dynamic Transactional Templates and Template Versions.
API Call:
/templates/{template_id}/versions/{version_id}
using sendgrid-ruby:
require 'sendgrid-ruby'
sg = SendGrid::API.new(api_key: sendgrid_api_key)
# GET /v3/templates/{template_id}/versions/{version_id}
response = sg.client.templates._(template_id).versions._(template_version_id).get
(Note: the template_version_id is the ID and not the Name of the template version)
The response body then includes a field called html_content, which is the full HTML of the dynamic template version, including any Handlebars placeholders.
You can make the API call via Postman as:
https://api.sendgrid.com/v3/templates/d-d44fdfsdfdsfd342342343
with a Bearer token (your SendGrid API key) in the Authorization header, like:
Authorization: Bearer SG.Fvsdfsdjfksdfsdfjsdkjfsdfksjdfsdfksjdfkjsdkfjsdf
The response is:
{
"id": "d-d55d081558a641b48a8a1145b4549fbe",
"name": "Bt_Payment_Reminder (Active)",
"generation": "dynamic",
"updated_at": "2021-12-21 07:35:12",
"versions": [
{
"id": "a95c3652-e49f-4608-a9dd-5aa4831c2dc3",
"user_id": 11702857,
"template_id": "d-d55d081558a641b48a8a1145b4549fbe",
"active": 1,
"name": "Bt_Payment_Reminder_Updated",
"html_content": "Hello {{firstName}}",
"plain_content": "Hello {{firstName}}",
"generate_plain_content": true,
"subject": "{{subject}}",
"updated_at": "2021-12-21 07:37:48",
"editor": "code",
"test_data": "{\n \"firstName\":\"Virendra\"}",
"thumbnail_url": "sdasdasdasdasdasdsd"
}
]
}
I am trying to use https://github.com/ibm-watson-iot/openwhisk-package-watsoniotp in an OpenWhisk sequence (containing two actions); all code is Node.js.
I am testing the sequence using Postman. Once the first action completes, it returns the variable payload, which is passed to the next action in the sequence, openwhisk-package-watsoniotp (added via a binding in the IBM Cloud console, so I am unable to modify this code; it is locked).
I can post data from Postman into the Watson IoT Platform via the sequence. However, the payload is interpreted as a string, not as JSON.
This is the body I post from Postman (one of the variants I have tried):
{"payload": "{'speed': 10}"}
My Node.js actions return the input unmodified:
return {payload: params.payload};
The value should be JSON. However, WIoTP is unable to interpret the payload and basically tokenizes the values. This is evident when I try to create a board and a card: the property list lets me select each value in the array.
The openwhisk-package-watsoniotp code, as far as I can tell, just takes params.payload as-is and passes it along.
I found an example in the code that answers the question.
The payload should be nested; I missed that originally.
{
"key": "sampleInput",
"value": {
"eventType": "status",
"payload": {
"temp": 4
},
"domain": "messaging.internetofthings.ibmcloud.com",
"typeId": "xxxx",
"deviceId": "xxxx01"
}
}
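Applied to the original Postman request, the body should therefore carry the payload as a nested JSON object rather than a quoted string, presumably something like the following (depending on the binding, the other fields from the sample, such as eventType, typeId and deviceId, may also need to be supplied):
{
    "payload": {
        "speed": 10
    }
}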
I have a Node app that uses the Watson Conversation service. I am able to successfully trigger a call to another API from a dialog node using the JSON it returns in the reply. However, after reading up, it seems I am doing it wrong. I am triggering my client server to make a REST call by adding an action property to the context:
{
"context": {
"action": "lookup"
},
"output": {}
}
When I get my result, I add it to the context object and pass it back to the Conversation service. This seems to work OK, but it causes some issues:
1) I have to manually delete these properties after I trigger the call I want.
2) In Conversation I must wait for user input, even though I am not actually requesting user input on the front end; instead, my client app sends a message with no input text and the results of the REST call on the context object. This message, returned to the conversation at the node where the action was triggered, is what triggers the child nodes. It seems like there is a standardized way IBM wants you to make these programmatic calls, regardless of whether it's to an IBM Cloud function or your own client app: https://console.bluemix.net/docs/services/conversation/dialog-actions.html#dialog-actions
The docs method:
{
"context": {
"variable_name" : "variable_value"
},
"actions": [
{
"name":"<actionName>",
"type":"client | server",
"parameters": {
"<parameter_name>":"<parameter_value>",
"<parameter_name>":"<parameter_value>"
},
"result_variable": "<result_variable_name>",
"credentials": "<reference_to_credentials>"
}
],
"output": {
"text": "response text"
}
}
Is this a new feature? I was referencing sample projects for my own app and I didn't see this pattern. By using this format, will it tell the parent node to wait for a response to come back before trying to process the children? Will it prevent me from needing to delete properties off the context object, so that I'm not calling the same action over and over with the same parameters in further turns of the conversation?
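For what it's worth, I would expect my lookup trigger to be expressed in the documented format roughly like this (names and values are illustrative only):
{
    "output": {
        "text": "Looking that up for you..."
    },
    "actions": [
        {
            "name": "lookup",
            "type": "client",
            "parameters": {
                "customer_id": "$customer_id"
            },
            "result_variable": "lookup_result"
        }
    ]
}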