How to get repository name using multiple checkout - azure-devops

Is there any variable (environment, system, resources) in the pipeline that holds the values for foo_repo and bar_repo? I am looking for the paths (just/code/foo and just/code/bar), as I don't want to duplicate them in the config.
$(Build.Repository.Name) returns the repo name for self, but what about the other repositories?
resources:
  repositories:
  - repository: foo_repo
    type: git
    name: just/code/foo
  - repository: bar_repo
    type: git
    name: just/code/bar

steps:
- checkout: foo_repo
- checkout: bar_repo
- checkout: self

When you check out multiple repositories, some details about the self repository are available as variables. When you use multi-repo triggers, some of those variables have information about the triggering repository instead. Details about all of the repositories consumed by the job are available as a template context object called resources.repositories.
For example, to get the ref of a non-self repository, you could write a pipeline like this:
resources:
  repositories:
  - repository: other
    type: git
    name: MyProject/OtherTools

variables:
  tools.ref: $[ resources.repositories['other'].ref ]

steps:
- checkout: self
- checkout: other
- bash: |
    echo "Tools version: $TOOLS_REF"
https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops#repository-details
The repositories context contains:

resources['repositories']['self'] =
{
  "alias": "self",
  "id": "<repo guid>",
  "type": "Git",
  "version": "<commit hash>",
  "name": "<repo name>",
  "project": "<project guid>",
  "defaultBranch": "<default ref of repo, like 'refs/heads/main'>",
  "ref": "<current pipeline ref, like 'refs/heads/topic'>",
  "versionInfo": {
    "author": "<author of tip commit>",
    "message": "<commit message of tip commit>"
  },
  "checkoutOptions": {}
}
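Applying the same pattern to the repositories in the question: the name field of that context holds the project-qualified path you are after. A minimal sketch mirroring the documented example above (the variable names foo.path and bar.path are just illustrative):

variables:
  foo.path: $[ resources.repositories['foo_repo'].name ]
  bar.path: $[ resources.repositories['bar_repo'].name ]

steps:
- checkout: foo_repo
- checkout: bar_repo
- checkout: self
- bash: |
    echo "foo_repo path: $FOO_PATH"   # just/code/foo
    echo "bar_repo path: $BAR_PATH"   # just/code/bar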

Related

Unable to use | character in AzureFunctionApp appSettings:

I am setting a load of appSettings in my AzureFunctionApp@1 deployment task, but whenever I try to put each one on a new line using the | character I get the error:
##[error]Error: Failed to update App service '{{functionName}}' application settings. Error: BadRequest - Parameter name cannot be empty. (CODE: 400)
The output above the error seems to show that it has indeed built the JSON with an empty parameter name, but I don't know why. I've tested with the values on separate lines and on a single line; neither of these works:
appSettings: |
  '-Values:Setting1 "$(SettingVal1)"
  -Values:Setting2 "$(SettingVal2)"'

appSettings: |
  '-Value:Setting1 "$(SettingVal1)" -Values:Setting2 "$(SettingVal2)"'
But this does:
appSettings: '-Value:Setting1 "$(SettingVal1)" -Values:Setting2 "$(SettingVal2)"'
I've also tried without the ' characters, but that made no difference either.
As per your feedback, I'm converting my comment to an answer; I also tried this locally on my system.
Multi-line JSON input works for setting multiple values in the app settings, and is the closest approach:
appSettings: |
  [
    {
      "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
      "value": "$(Key)",
      "slotSetting": false
    },
    {
      "name": "MYSQL_DATABASE_NAME",
      "value": "$(DB_Name)",
      "slotSetting": false
    }
  ]
Multiline JSON doesn't work with the AzureFunctionApp@1 task's appSettings parameter: that task expects appSettings as a single line of -name value switches, so newlines (or a JSON array) end up parsed as parameters with empty names.
If you try to use the multiline JSON appSettings with the AzureFunctionApp@1 task, you will get the error: BadRequest - Parameter name cannot be empty. (CODE: 400)
To use the multiline JSON appSettings, you need to use a separate AzureAppServiceSettings@1 task, as mentioned in this document.
I can confirm this works after the AzureFunctionApp@1 task. So my pipeline now has:

steps:
...
- task: AzureFunctionApp@1
  displayName: Deploy the Function App
  condition: succeeded()
  inputs:
    azureSubscription: "${{parameters.AppAzureSubscription}}"
    appName: "${{parameters.functionAppName}}"
    package: "$(Pipeline.Workspace)/drop/$(Build.BuildId).zip"
- task: AzureAppServiceSettings@1
  displayName: Update app settings
  inputs:
    azureSubscription: "${{parameters.AppAzureSubscription}}"
    appName: "${{parameters.functionAppName}}"
    appSettings: |
      [
        {
          "name": "Values:DbConnectionString",
          "value": "$(DbConnectionString)",
          "slotSetting": false
        },
        ...
      ]

GitHub actions to trigger build on new Pull Requests

I have the following workflow to trigger CMake builds on my GitHub project:
name: C/C++ CI

on:
  push:
    branches: [ master, develop ]
  pull_request:
    types: [ opened, edited, reopened, review_requested ]
    branches: [ master, develop ]

jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - name: Install deps
        run: sudo apt-get update; sudo apt-get install python3-distutils libfastjson-dev libcurl4-gnutls-dev libssl-dev -y
      - uses: actions/checkout@v2
      - name: Run CMake
        run: mkdir build; cd build; cmake .. -DCMAKE_INSTALL_PREFIX=/home/runner/work/access/build/ext_install;
      - name: Run make
        run: cd build; make -j8
I expected it to trigger builds on new Pull Requests and have the build status as a condition to approve the merging.
However, I'm finding it a bit challenging to achieve this; I'm sort of a newbie when it comes to GitHub Actions.
I was able to accomplish your scenario with a combination of GitHub Actions and GitHub protected branch settings.
You've got your GitHub Actions workflow set up correctly to run on pull requests with a destination branch of master or develop.
Now you have to configure your repo to prevent merging a PR if the CI fails:
On your GitHub repo, go to Settings => Branches => Add a rule => set the branch name pattern to master => enable 'Require status checks to pass before merging' => under 'Status checks found in the last week for this repository', pick the CI build you want to enforce.
As of this writing there is no way to do that using only GitHub Actions, but you can do it by writing an action, using JavaScript or one of the other languages supported by GitHub Actions.
import * as core from '@actions/core'
import * as github from '@actions/github'
// CheckConclusion and getRequiredEnvironmentVariable come from the author's own helpers
import {getRequiredEnvironmentVariable} from "./utils";

type GitHubStatus = { context: string, description?: string, state: "error" | "failure" | "pending" | "success", target_url?: string }

// Translate the aggregated check-suite result into a commit status
function commitStatusFromConclusion(conclusion: CheckConclusion): GitHubStatus {
  let status: GitHubStatus = {
    context: "branch-guard",
    description: "Checks are running...",
    state: "pending",
  };
  if (conclusion.allCompleted) {
    if (conclusion.failedCheck) {
      status.state = "failure";
      status.description = `${conclusion.failedCheck.appName} ${conclusion.failedCheck.conclusion}`;
      status.target_url = conclusion.failedCheck.url
    } else {
      status.state = "success";
      status.description = "All checks are passing!";
    }
  }
  return status;
}

// Report the status back to GitHub on the given commit
export async function setStatus(repositoryOwner: string, repositoryName: string, sha: string, status: GitHubStatus): Promise<number> {
  let api = new github.GitHub(getRequiredEnvironmentVariable('GITHUB_TOKEN'));
  let params = {
    owner: repositoryOwner,
    repo: repositoryName,
    sha: sha,
  };
  let response = await api.repos.createStatus({...params, ...status});
  return response.status
}
and after you create the action you only have to call the step inside your workflow:

on:
  pull_request: # to update newly opened PRs or when a PR is synced
  check_suite: # to update all PRs upon a Check Suite completion
    types: ['completed']

name: Branch Guard
jobs:
  branch-guard:
    name: Branch Guard
    if: github.event.check_suite.head_branch == 'master' || github.event.pull_request.base.ref == 'master'
    runs-on: ubuntu-latest
    steps:
    - uses: YOUR-REPO/YOUR-ACTION@v0.1
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
If you want more documentation about creating JavaScript actions, see GitHub's guide.
I got this example from:
Block PR merges when Checks for target branches are failing
Hope that it helps you.

AWS CodePipeline GitHub webhook can not be registered with GitHub if repo is an organisation repository

When I set up the hook using the console it works, but when I try to do it using CloudFormation it never works. It does not even work if I use the AWS CLI version:
aws codepipeline register-webhook-with-third-party --webhook-name AppPipelineWebhook-aOnbonyFrNZu
This is what my webhook looks like (output from aws codepipeline list-webhooks):
{
  "webhooks": [
    {
      "definition": {
        "name": "AppPipelineWebhook-aOnbonyFrNZu",
        "targetPipeline": "ftp-proxy-cf",
        "targetAction": "GitHubAction",
        "filters": [
          {
            "jsonPath": "$.ref",
            "matchEquals": "refs/heads/{Branch}"
          }
        ],
        "authentication": "GITHUB_HMAC",
        "authenticationConfiguration": {
          "SecretToken": "<REDACTED>"
        }
      },
      "url": "https://eu-west-1.webhooks.aws/trigger?t=eyJ<ALSO REDACTED>F9&v=1",
      "arn": "arn:aws:codepipeline:eu-west-1:<our account ID>:webhook:AppPipelineWebhook-aOnbonyFrNZu",
      "tags": []
    }
  ]
}
The error I get is:
An error occurred (ValidationException) when calling the RegisterWebhookWithThirdParty operation: Webhook could not be registered with GitHub. Error cause: Not found [StatusCode: 404, Body: {"message":"Not Found","documentation_url":"https://developer.github.com/v3/repos/hooks/#create-a-hook"}]
These are the two relevant sections from my CloudFormation file:
Resources:
  AppPipelineWebhook:
    Type: AWS::CodePipeline::Webhook
    Properties:
      Authentication: GITHUB_HMAC
      AuthenticationConfiguration:
        SecretToken: '{{resolve:secretsmanager:my/secretpath/github:SecretString:token}}'
      Filters:
        - JsonPath: $.ref
          MatchEquals: 'refs/heads/{Branch}'
      TargetPipeline: !Ref CodePipeline
      TargetAction: GitHubAction
      TargetPipelineVersion: !GetAtt CodePipeline.Version
      # RegisterWithThirdParty: true

  CodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name:
        Ref: PipelineName
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      Stages:
        - Name: Source
          Actions:
            - Name: GitHubAction
              ActionTypeId:
                Category: Source
                Owner: ThirdParty
                Version: 1
                Provider: GitHub
              OutputArtifacts:
                - Name: SourceOutput
              Configuration:
                Owner: myorganisationnameongithub
                Repo: ftp-proxy
                Branch: master
                OAuthToken: '{{resolve:secretsmanager:my/secretpath/github:SecretString:token}}'
                PollForSourceChanges: false
Polling works fine: if I manually start an execution of the GitHubAction stage from the AWS Console, the latest commits are downloaded, and if I set PollForSourceChanges: true, that kind of polling also works. But alas, the webhook workflow does not, because the hook cannot be registered with GitHub.
The error is observed due to two possible causes:
1. The Personal Access Token (PAT) is not configured with the following GitHub scopes: admin:repo_hook and admin:org_hook.
You can verify these permissions under 'User' (top right) > 'Settings' > 'Developer Settings' > 'Personal Access Tokens'.
2. The 'Owner' and/or 'Repository' name are incorrect in the CloudFormation template:
For the pipeline configuration in CloudFormation, make sure 'GitHubOwner' is the organization name and the repository name is just the repo name, without an "org/repo_name" prefix, e.g. in your case:
Example:

Configuration:
  Owner: !Ref GitHubOwner            # <========== GitHub org name
  Repo: !Ref RepositoryName
  Branch: !Ref BranchName
  OAuthToken: !Ref GitHubOAuthToken  # <========== Personal Access Token
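Once the PAT has those scopes, the line commented out in the original template can also be re-enabled so CloudFormation registers the hook itself, instead of requiring a manual register-webhook-with-third-party call. A sketch based on the question's own resource:

AppPipelineWebhook:
  Type: AWS::CodePipeline::Webhook
  Properties:
    Authentication: GITHUB_HMAC
    AuthenticationConfiguration:
      SecretToken: '{{resolve:secretsmanager:my/secretpath/github:SecretString:token}}'
    Filters:
      - JsonPath: $.ref
        MatchEquals: 'refs/heads/{Branch}'
    TargetPipeline: !Ref CodePipeline
    TargetAction: GitHubAction
    TargetPipelineVersion: !GetAtt CodePipeline.Version
    RegisterWithThirdParty: true  # register the hook with GitHub on stack create/update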

How to access CloudWatch Event data from triggered Fargate task?

I read the docs on how to Run an Amazon ECS Task When a File is Uploaded to an Amazon S3 Bucket. However, this document stops short of explaining how to get the bucket/key values from the triggering event from within the Fargate task code itself. How can that be done?
I am not sure if you still need the answer for this one, but I did something similar to what Steven1978 mentioned, only using CloudFormation.
The config you're looking for is the InputTransformer. Check this example of a YAML CloudFormation template for an Event Rule:
rEventRuleForFileUpload:
  Type: AWS::Events::Rule
  Properties:
    Description: "EventRule"
    State: "ENABLED"
    EventPattern:
      source:
        - "aws.s3"
      detail-type:
        - 'AWS API Call via CloudTrail'
      detail:
        eventSource:
          - s3.amazonaws.com
        eventName:
          - "PutObject"
          - "CompleteMultipartUpload"
        requestParameters:
          bucketName: "{YOUR_BUCKET_NAME}"
    Targets:
      - Id: '{YOUR_ECS_CLUSTER_ID}'
        Arn: !Sub "arn:aws:ecs:${AWS::Region}:${AWS::AccountId}:cluster/${NAME_OF_YOUR_CLUSTER_RESOURCE}"
        RoleArn: !GetAtt {YOUR_ROLE}.Arn
        EcsParameters:
          TaskCount: 1
          TaskDefinitionArn: !Ref {YOUR_TASK_DEFINITION}
          LaunchType: FARGATE
          # {... WHATEVER CONFIG YOU MIGHT HAVE ...}
        InputTransformer:
          InputPathsMap:
            s3_bucket: "$.detail.requestParameters.bucketName"
            s3_key: "$.detail.requestParameters.key"
          InputTemplate: '{ "containerOverrides": [ { "name": "{THE_NAME_OF_YOUR_CONTAINER_DEFINITION}", "environment": [ { "name": "EVENT_BUCKET", "value": <s3_bucket> }, { "name": "EVENT_OBJECT_KEY", "value": <s3_key> } ] } ] }'
With this approach, you'll be able to get the S3 bucket name (EVENT_BUCKET) and the S3 object key (EVENT_OBJECT_KEY) as environment variables inside your container.
The info isn't very clear, indeed, but here are some sources I used to finally get it working:
ContainerOverride: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerOverride.html
InputTransformer: https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_InputTransformer.html#API_InputTransformer_Contents
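Inside the task code itself, the overridden values then arrive as ordinary environment variables. A minimal sketch in TypeScript (any language works; the variable names are the ones chosen in the InputTemplate above):

// Read the values injected by the event rule's InputTransformer.
const bucket = process.env.EVENT_BUCKET;
const key = process.env.EVENT_OBJECT_KEY;

if (!bucket || !key) {
  // The task was likely started manually rather than by the event rule.
  throw new Error("EVENT_BUCKET / EVENT_OBJECT_KEY are not set");
}

console.log(`Triggered by s3://${bucket}/${key}`);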

Get Lambda Arn into Resources : Type: AWS::Lambda::Permission

I have the following in my serverless.yml file:

lambdaQueueFirstInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: ServiceLambdaFunctionQualifiedArn
    Action: 'lambda:InvokeFunction'
    Principal: sqs.amazonaws.com
and I have the following in the Outputs section :
Outputs:
  ServiceLambdaFunctionQualifiedArn:
    Value:
      'Fn::GetAtt': [ lambdaQueueFirst, Arn ]
this comes back with the message:
Template error: instance of Fn::GetAtt references undefined resource lambdaQueueFirst
Am I missing something, and if so, what? There is very little in the way of help or examples.
Also, is there a better way of getting the Lambda ARN into the permissions code? If so, what is it?
You can use environment variables to construct the ARN value. In your case, you can define a variable in your provider section as below. You might need to modify it a little according to your application.
service: serverlessApp2

provider:
  name: aws
  runtime: python3.6
  region: ap-southeast-2
  stage: dev
  environment:
    AWS_ACCOUNT: 1234567890 # use your own AWS account number here
    # define the ARN of the function that you want to invoke
    FUNCTION_ARN: "arn:aws:lambda:${self:provider.region}:${self:provider.environment.AWS_ACCOUNT}:function:${self:service}-${self:provider.stage}-lambdaQueueFirst"

Outputs:
  ServiceLambdaFunctionQualifiedArn:
    Value: "${self:provider.environment.FUNCTION_ARN}"
See this and serverless variables for AWS for examples.
You can do this:

resources:
  Resources:
    LoggingLambdaPermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName: { "Fn::GetAtt": ["LoghandlerLambdaFunction", "Arn"] }
        Action: lambda:InvokeFunction
        Principal: { "Fn::Join": ["", ["logs.", { "Ref": "AWS::Region" }, ".amazonaws.com"]] }
reference:
https://github.com/andymac4182/serverless_example
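Applied to the original question: the Serverless Framework normalizes a function named lambdaQueueFirst into a CloudFormation logical ID of roughly LambdaQueueFirstLambdaFunction (an assumption based on the framework's usual naming convention; check the generated stack template to confirm), so the permission could look like this:

resources:
  Resources:
    lambdaQueueFirstInvokePermission:
      Type: AWS::Lambda::Permission
      Properties:
        # Logical ID assumed from Serverless' naming: <TitleCased name> + "LambdaFunction"
        FunctionName: { "Fn::GetAtt": ["LambdaQueueFirstLambdaFunction", "Arn"] }
        Action: lambda:InvokeFunction
        Principal: sqs.amazonaws.com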