Best practice to integrate Azure Data Factory and Slack?

I have an Azure Data Factory pipeline and I would like to send a notification to Slack at the end of the pipeline. The notification body is formed from database content.
What is the best practice to integrate Azure Data Factory and Slack?
A) ADF Webhook (to Slack) -> Slack
B) ADF Web -> Logic Apps Web + Webhook -> Slack
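Option A can be done with the Web (or Webhook) activity posting straight to a Slack incoming webhook. As a minimal sketch (the webhook URL is a placeholder), the activity would look like:
URL: https://hooks.slack.com/services/XXX/YYY/ZZZ
Method: POST
Body:
{
    "text": "Pipeline @{pipeline().Pipeline} finished."
}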

Below is one way that worked for me.
First, I created two variables:
ListOfFiles - Array
strListOfFiles - String
Here is the pipeline that I'm using:
ForEach loop activities:
In the ForEach settings, I set Items to read the child items, i.e.
@activity('Get List of Files').output.childItems
Then, in the Set variable activity, I'm storing the ListOfFiles array inside strListOfFiles.
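A minimal sketch of that variable wiring, assuming the usual Append variable / Set variable pattern (the exact expressions are an assumption, since the original screenshots are not shown here):
Inside the ForEach - Append variable:
    Name: ListOfFiles
    Value: @item().name
After the ForEach - Set variable:
    Name: strListOfFiles
    Value: @string(variables('ListOfFiles'))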
And then in the Web activity I'm calling my Logic App URL with the POST method, with { "ListOfFiles": @{variables('strListOfFiles')} } as the body.
Logic App workflow
I'm using the JSON schema below inside the HTTP request trigger:
{
    "properties": {
        "ListOfFiles": {
            "type": "array"
        }
    },
    "type": "object"
}
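The schema above only parses the incoming request; to actually deliver the Slack message, the Logic App still needs a posting step after the trigger. A minimal sketch, assuming a Slack incoming webhook (the URL is a placeholder):
HTTP action:
    Method: POST
    URI: https://hooks.slack.com/services/XXX/YYY/ZZZ
    Body:
    {
        "text": "Files processed: @{triggerBody()?['ListOfFiles']}"
    }
The Slack connector's Post message action can be used instead of a raw HTTP action if you prefer a managed connection.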

Related

How to send activity output as email attachment in logic app

I have an ADF pipeline, and I want to send the output of my activity as an email attachment via a Logic App.
I have a Lookup activity followed by a ForEach activity, and inside the ForEach activity I have a Web activity to call the Logic App.
I want to send the output of the Lookup activity as an email attachment to the Logic App, but I am not able to figure out this integration part.
Create a Logic App with an HTTP request trigger and Outlook.
Inside 'When a HTTP request is received':
Copy the HTTP POST URL.
Request Body JSON Schema:
{ "properties": { "dataFactoryName": { "type": "string" }, "message": { "type": "string" }, "pipelineName": { "type": "string" }, "receiver": { "type": "string" } }, "type": "object" }
POST Method
Send an Email:
Connect your Outlook email.
Use the HTTP POST URL as shown in step 1.
Create a parameter named receiver.
Add this dynamic content:
{
    "message": "This is the row from lookup item @{item().customerID},@{item().gender},@{item().SeniorCitizen},@{item().Partner}.",
    "dataFactoryName": "@{pipeline().DataFactory}",
    "pipelineName": "@{pipeline().Pipeline}",
    "receiver": "@{pipeline().parameters.receiver}"
}
The pipeline executed successfully and I got the output:
There is no direct or easy way to send an email attachment from ADF.
As a workaround, you will first have to save the output of your Lookup activity to a file, and then follow the approach described in this video by a community volunteer, where Logic Apps come into play to send the Lookup activity output file as an attachment: How To Send File as Attachment From Azure Data Factory - Azure Data Factory Tutorial 2021
To save the Lookup output data to a file, you can follow this approach: Get Output of lookup activity in a file
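For the attachment wiring in that Logic App, the key part is roughly the following (a sketch, assuming the file was saved to Blob Storage and a 'Get blob content' action precedes the email step; names are illustrative):
Send an email (V2):
    Attachments Name: lookup-output.json
    Attachments Content: @{body('Get_blob_content')}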

How to send dataset in Web Activity? ADF

I want to publish data from my Storage Account into a Service Bus.
I already tried sending a simple body and it works fine, but I don't know how I should set a dataset.
Web Activity settings:
When I run this activity in a pipeline, it sends:
{
    "myMessage": "Sample",
    "datasets": [{
        "name": "MyDataset",
        "properties": {
            ...
        }
    }],
    "linkedServices": [{
        "name": "MyStorageLinkedService1",
        "properties": {
            ...
        }
    }]
}
and I want to send the data from the file in the dataset. Does anyone know how I should set up the Web activity?
You can achieve that by using a Copy activity.
Here is a quick demo that I made:
I used the JSONPlaceholder API; I want to modify the array and add a custom value by doing a PUT request.
Check it out here: https://jsonplaceholder.typicode.com/guide/
Please read the "Updating a resource" section carefully.
Here is the JSON that I want to modify; I added it as a dataset in ADF.
The main idea is to set the dataset as the source and a REST API endpoint as the sink, so we are sending the dataset as the input to the request in the Copy activity.
Copy activity:
Source:
Sink:
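Since the source and sink screenshots don't carry over here, a rough JSON sketch of the sink side of such a Copy activity (property names follow the REST connector doc linked below; the PUT method matches the JSONPlaceholder 'Updating a resource' example):
"sink": {
    "type": "RestSink",
    "requestMethod": "PUT",
    "httpRequestTimeout": "00:01:40"
}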
You can read more about it here:
https://learn.microsoft.com/en-us/azure/data-factory/connector-rest?tabs=data-factory#dataset-properties
Here is the output of the Copy Activity:

Azure DevOps - Unable to Create Var Group using Azure DevOps API and Auth Token

Requirements: We would like to create a Variable Group (along with some variables) in a given project.
Option 1: We are able to create a new Variable Group successfully when we create a request via Postman using a PAT token which has FULL access.
Option 2: Our end goal is to invoke the ADO REST API in a Web App which uses OAuth. When the end user logs in and makes a call (please see the input details below), we get a '401 Unauthorized - The user is not authorized to access this resource.' error. The Web App's application has the Variable Groups manage scope as shown below.
Troubleshooting: As part of troubleshooting, for Option 1, which uses a PAT (with full access) in Postman, we updated the permissions of the PAT to just have Create, Read, & manage for Variable Groups as shown below.
Now even Option 1 is not working after giving the PAT Custom defined access.
Are we missing something?
Postman Details:
URL: https://dev.azure.com/myorgname/_apis/distributedtask/variablegroups?api-version=6.0-preview.2
Verb: POST
Headers: Authorization: Basic
Body:
{
    "name": "This is ignored",
    "description": "This is ignored",
    "type": "Vsts",
    "variables": {
        "BuildConfiguration": {
            "value": "Release"
        }
    },
    "variableGroupProjectReferences": [
        {
            "name": "VarGroup",
            "description": "The variable group to store the information about the variables using in the Pipeline",
            "projectReference": {
                "id": "#ProjectId#",
                "name": "#ProjectName#"
            }
        }
    ]
}
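For reference, the same Postman call scripted in Python (a sketch; the PAT is a placeholder, and the #ProjectId#/#ProjectName# tokens are kept from the body above). Note that Azure DevOps Basic auth takes an empty username with the PAT as the password:
import requests

url = ("https://dev.azure.com/myorgname/_apis/distributedtask/"
       "variablegroups?api-version=6.0-preview.2")
body = {
    "name": "This is ignored",
    "type": "Vsts",
    "variables": {"BuildConfiguration": {"value": "Release"}},
    "variableGroupProjectReferences": [{
        "name": "VarGroup",
        "projectReference": {"id": "#ProjectId#", "name": "#ProjectName#"},
    }],
}

# requests encodes ("", pat) as the Basic Authorization header
resp = requests.post(url, json=body, auth=("", "<PAT>"))
print(resp.status_code, resp.text)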
I can also reproduce your issue with Option 1: not only with Read, create, & manage for Variable Groups, but even if I select all the scopes via Custom defined, it still does not work.
According to this doc - https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/manage-pats-with-policies-for-administrators?view=azure-devops#restrict-creation-of-full-scoped-pats
Some of our public APIs are currently unassociated with a PAT scope, and can therefore only be used with “full-scoped” PATs. Because of this, restricting the creation of full-scoped PATs might block some workflows. We're working to identify and document the affected APIs and eventually associate them with the appropriate scope. For now, these workflows can be unblocked by using the allow list.
I believe this should be the reason for the issue; there may be additional permissions needed to create variable groups. For Option 2, there may be a similar cause.
So in this case you may need to use the full-access PAT temporarily, as the doc mentions: "We're working to identify and document the affected APIs and eventually associate them with the appropriate scope."

How to trigger azure pipeline via API in a way it does not report it was manually triggered

We have an Azure pipeline building a static site. When there is a change in a content repository, the site needs to be rebuilt. For that, we're using webhooks and the Azure DevOps API. The request to queue the build is very simple and is illustrated, for example, here.
What I don't like about this is that in the build listing it says "Manually triggered for person XY", where person XY is the one who generated the credentials used in the API request. It seems quite confusing, because it is strange for any API request to be labeled as "manually requested". What would be the best way to achieve a more semantically correct message?
I've found there is a reason property which can be sent in the request, but none of the values seems to represent what I want, and some of them do not work (they probably need additional properties, and there is no documentation for that).
Based on my test, when you use the REST API to queue a build and set the build reason, the reason can be shown in the build (except for batchedCI and buildCompletion).
Here is the REST API example:
POST https://dev.azure.com/Organization/Project/_apis/build/builds?api-version=4.1
Request Body:
{
    "definition": {
        "id": 372
    },
    "reason": "pullRequest"
}
The values checkInShelveset, individualCI, pullRequest, and schedule show their own names.
The values manual and none show as a manual trigger.
Other values (e.g. all, userCreated) show "Other Build Reason".
As for the values batchedCI and buildCompletion:
batchedCI: Continuous integration (CI) triggered by a Git push or a TFVC check-in, with the Batch changes option selected.
This means batching changes is required to get this trigger, so it is not supported when queueing a build via the REST API.
buildCompletion: you can refer to this ticket; this reason is not supported in a REST API-queued build.
Note: If you enter a custom value or a misspelling, it will always display as a manual trigger.
In the end, I went with the all value and also overrode the person via the requestedFor property. This leads to the message "Other build reason", which seems usable to me.
{
    "definition": {
        "id": 17
    },
    "reason": "all",
    "requestedFor": {
        "id": "4f9ff423-0e0d-4bfb-9f6b-e76d2e9cd3ae"
    }
}
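For completeness, a quick Python sketch of queueing a build with that body (the organization, project, and PAT are placeholders):
import requests

url = "https://dev.azure.com/Organization/Project/_apis/build/builds?api-version=4.1"
body = {
    "definition": {"id": 17},
    "reason": "all",
    "requestedFor": {"id": "4f9ff423-0e0d-4bfb-9f6b-e76d2e9cd3ae"},
}

resp = requests.post(url, json=body, auth=("", "<PAT>"))
# the build listing should show "Other build reason" for this value
print(resp.json().get("reason"))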
However, I'm not sure whether there are any unwanted consequences of this "all reasons" value.

How can I add a subscription in Appsync which is invoked by a lambda?

I know that subscriptions in AppSync work with mutations, meaning that whenever a mutation is invoked, the subscription is also invoked, which sends the message to all subscribers.
What I want to know is whether there is any way I can send a message to the user directly from a Lambda, using AppSync or anything else, in real time. That is, I don't want the user to have to refresh the page.
The use case could be, say, a standalone Lambda which runs every hour and wants to notify users about something. It is not part of any mutation or query.
You can attach a None data source to a mutation and send the information required for the subscription to be triggered within the mutation arguments.
For example, let's assume you have the following schema:
type Book {
    bookId: Int
}
input BookInput {
    bookId: Int
}
type Mutation {
    triggerBookUpdate(input: BookInput!): Book
}
type Subscription {
    onBookUpdate(bookId: Int!): Book
    @aws_subscribe(mutations: ["triggerBookUpdate"])
}
Then you attach the None data source to the resolver for the triggerBookUpdate field and provide the following request mapping template:
#**
Resolvers with None data sources can locally publish events that fire
subscriptions or otherwise transform data without hitting a backend data source.
The value of 'payload' is forwarded to $ctx.result in the response mapping template.
*#
{
    "version": "2017-02-28",
    "payload": $utils.toJson($context.arguments.input)
}
and the response mapping template:
$util.toJson($ctx.result)
For more information, check this documentation.
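To cover the Lambda side of the question: the standalone Lambda only has to call the triggerBookUpdate mutation against the AppSync GraphQL endpoint, and AppSync then pushes the result to all onBookUpdate subscribers, with no real data source involved. A minimal Python sketch (the endpoint and API key are placeholders, and API-key auth is an assumption; IAM or Cognito auth would need signed requests instead):
import json
import urllib.request

APPSYNC_URL = "https://<api-id>.appsync-api.<region>.amazonaws.com/graphql"
API_KEY = "<api-key>"

def handler(event, context):
    # Invoke the mutation; the None data source just echoes the arguments,
    # which is enough to fire the onBookUpdate subscription.
    payload = {
        "query": """
            mutation Trigger($input: BookInput!) {
                triggerBookUpdate(input: $input) { bookId }
            }
        """,
        "variables": {"input": {"bookId": 42}},
    }
    req = urllib.request.Request(
        APPSYNC_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "x-api-key": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())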