I have the following workflow to trigger CMake builds on my GitHub project:
name: C/C++ CI
on:
  push:
    branches: [ master, develop ]
  pull_request:
    types: [ opened, edited, reopened, review_requested ]
    branches: [ master, develop ]
jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
    - name: Install deps
      run: sudo apt-get update; sudo apt-get install python3-distutils libfastjson-dev libcurl4-gnutls-dev libssl-dev -y
    - uses: actions/checkout@v2
    - name: Run CMake
      run: mkdir build; cd build; cmake .. -DCMAKE_INSTALL_PREFIX=/home/runner/work/access/build/ext_install;
    - name: Run make
      run: cd build; make -j8
I expected it to trigger builds on new pull requests and to use the build status as a condition for allowing the merge.
However, I'm finding it a bit challenging to achieve this. I'm sort of a newbie when it comes to GitHub Actions.
I'm able to accomplish your scenario with a combination of GitHub Actions and GitHub protected branch settings.
You've got your GitHub Actions workflow set up correctly to run on a pull request whose destination branch is master or develop.
Now you have to configure your repo to prevent merging a PR if the CI fails:
On your GitHub repo, go to Settings => Branches => Add a rule => set the branch name pattern to master => enable 'Require status checks to pass before merging' => under 'Status checks found in the last week for this repository', pick the CI build you want to enforce.
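If you would rather script that setting than click through the UI, here is a minimal sketch using the branch protection REST endpoint via Octokit; the owner, repo, and the "build" status check context are assumptions you would replace with your own values:

// Sketch: enable 'Require status checks to pass before merging' via the REST API.
// Assumes a token with admin rights on the repository; owner/repo/contexts are placeholders.
import { Octokit } from "@octokit/rest";

async function requireCiChecks(): Promise<void> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  await octokit.rest.repos.updateBranchProtection({
    owner: "my-org",                // placeholder: your user or organization
    repo: "my-repo",                // placeholder: your repository
    branch: "master",
    required_status_checks: {
      strict: true,                 // require the branch to be up to date before merging
      contexts: ["build"],          // placeholder: the job name from your CI workflow
    },
    enforce_admins: true,
    required_pull_request_reviews: null,
    restrictions: null,
  });
}

requireCiChecks().catch(err => { console.error(err); process.exit(1); });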
At the time of writing there is no way to do that using GitHub Actions alone, but you can achieve it by writing a custom action in JavaScript or another language supported by GitHub Actions.
import * as core from '@actions/core'
import * as github from '@actions/github'
import {getRequiredEnvironmentVariable} from "./utils";

type GitHubStatus = { context: string, description?: string, state: "error" | "failure" | "pending" | "success", target_url?: string }

function commitStatusFromConclusion(conclusion: CheckConclusion): GitHubStatus {
  let status: GitHubStatus = {
    context: "branch-guard",
    description: "Checks are running...",
    state: "pending",
  };
  if (conclusion.allCompleted) {
    if (conclusion.failedCheck) {
      status.state = "failure";
      status.description = `${conclusion.failedCheck.appName} ${conclusion.failedCheck.conclusion}`;
      status.target_url = conclusion.failedCheck.url
    } else {
      status.state = "success";
      status.description = "All checks are passing!";
    }
  }
  return status;
}

export async function setStatus(repositoryOwner: string, repositoryName: string, sha: string, status: GitHubStatus): Promise<number> {
  let api = new github.GitHub(getRequiredEnvironmentVariable('GITHUB_TOKEN'));
  let params = {
    owner: repositoryOwner,
    repo: repositoryName,
    sha: sha,
  };
  let response = await api.repos.createStatus({...params, ...status});
  return response.status
}
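The snippet above imports getRequiredEnvironmentVariable from ./utils and uses a CheckConclusion type, neither of which is shown. A rough sketch of what they might look like (my assumption about their shape, not the original action's source):

// Assumed shapes for the pieces referenced above; not the original ./utils source.
type FailedCheck = { appName: string, conclusion: string, url: string };
type CheckConclusion = { allCompleted: boolean, failedCheck?: FailedCheck };

// Reads an environment variable and fails loudly if it is missing.
export function getRequiredEnvironmentVariable(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Required environment variable ${name} is not set`);
  }
  return value;
}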
After you create the action, you only have to call it as a step inside your workflow:
on:
  pull_request: # to update newly open PRs or when a PR is synced
  check_suite: # to update all PRs upon a Check Suite completion
    types: ['completed']
name: Branch Guard
jobs:
  branch-guard:
    name: Branch Guard
    if: github.event.check_suite.head_branch == 'master' || github.event.pull_request.base.ref == 'master'
    runs-on: ubuntu-latest
    steps:
    - uses: YOUR-REPO/YOUR-ACTION@v0.1
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
If you want more documentation about creating JavaScript actions, see GitHub's guide on building JavaScript actions.
I got this example from:
Block PR merges when Checks for target branches are failing
Hope it helps you.
I am following HashiCorp's learning guide on how to set up GitHub Actions and Terraform. Everything runs great except the step that updates the PR with the Terraform plan.
I am hitting the following error:
An error occurred trying to start process '/home/runner/runners/2.287.1/externals/node12/bin/node' with working directory '/home/runner/work/ccoe-aws-ou-scp-manage/ccoe-aws-ou-scp-manage'. Argument list too long
The code I am using is:
- uses: actions/github-script@0.9.0
  if: github.event_name == 'pull_request'
  env:
    PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
      #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
      #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
      <details><summary>Show Plan</summary>
      \`\`\`${process.env.PLAN}\`\`\`
      </details>
      *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;

      github.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: output
      })
This is a straight copy/paste from the docs: https://learn.hashicorp.com/tutorials/terraform/github-actions
I have tried actions/github-script versions 5 and 6 and still hit the same problem. But when I copy and paste the plan in manually, everything is fine. If I do not use the output variable and use some placeholder text for the body, everything works. I can see that steps.plan.outputs.stdout is OK if I print only that.
I will be happy to share more details if needed.
I also encountered a similar issue.
It seems github-script can't accept a script argument that is too long.
References:
https://github.com/robburger/terraform-pr-commenter/issues/6#issuecomment-826966670
https://github.community/t/maximum-length-for-the-comment-body-in-issues-and-pr/148867
My answer:
- name: truncate terraform plan result
  run: |
    plan=$(cat <<'EOF'
    ${{ format('{0}{1}', steps.plan.outputs.stdout, steps.plan.outputs.stderr) }}
    EOF
    )
    echo "PLAN<<EOF" >> $GITHUB_ENV
    echo "${plan}" | grep -v 'Refreshing state' >> $GITHUB_ENV
    echo "EOF" >> $GITHUB_ENV
- name: create comment from plan result
  uses: actions/github-script@0.9.0
  if: github.event_name == 'pull_request'
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      const output = `#### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
      #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
      <details><summary>Show Plan</summary>
      \`\`\`\n
      ${ process.env.PLAN }
      \`\`\`
      </details>
      *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ inputs.TF_WORK_DIR }}\`, Workflow: \`${{ github.workflow }}\`*`;

      github.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: output
      })
Based on @Zambozo's hint in the comments, this worked great for me:
- name: Terraform Plan
  id: plan
  run: terraform plan -no-color -input=false
- name: generate random delimiter
  run: echo "DELIMITER=$(uuidgen)" >> $GITHUB_ENV
- name: truncate terraform plan result
  run: |
    echo "PLAN<<${{ env.DELIMITER }}" >> $GITHUB_ENV
    echo '[maybe truncated]' >> $GITHUB_ENV
    tail --bytes=10000 <<'${{ env.DELIMITER }}' >> $GITHUB_ENV
    ${{ format('{0}{1}', steps.plan.outputs.stderr, steps.plan.outputs.stdout) }}
    ${{ env.DELIMITER }}
    echo >> $GITHUB_ENV
    echo "${{ env.DELIMITER }}" >> $GITHUB_ENV
- name: post plan as sticky comment
  uses: marocchino/sticky-pull-request-comment@v2
  with:
    header: plan
    message: |
      #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
      <details><summary>Show Plan</summary>

      ```
      ${{ env.PLAN }}
      ```

      </details>
Notably, GitHub does have an upper limit on comment size, so this only displays the last 10 kB of the plan (which still shows the summary and any warnings).
This also implements secure heredoc delimiting (the random UUID delimiter) so that malicious plan output cannot break out of the heredoc.
Also note that the empty lines before and after the triple backticks in the message are significant; without them the formatting is destroyed.
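If you would rather stay with actions/github-script than switch to the sticky-comment action, the same idea works inside the script: keep only the tail of the plan so the comment body stays under GitHub's 65536-character limit. A minimal sketch, assuming the plan has been exported to a PLAN environment variable as in the answers above and that the token, owner, repo, and PR number are supplied by the workflow:

// Sketch: trim the plan to fit a PR comment, keeping the tail (summary and errors).
import { Octokit } from "@octokit/rest";

const COMMENT_LIMIT = 65536;          // GitHub's maximum issue/PR comment body length
const BUDGET = COMMENT_LIMIT - 1000;  // leave room for the markdown wrapper

function tailOfPlan(plan: string): string {
  if (plan.length <= BUDGET) return plan;
  return "[maybe truncated]\n" + plan.slice(plan.length - BUDGET);
}

async function postPlanComment(owner: string, repo: string, prNumber: number): Promise<void> {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
  const body = [
    "#### Terraform Plan 📖",
    "<details><summary>Show Plan</summary>",
    "",
    "```",
    tailOfPlan(process.env.PLAN ?? ""),
    "```",
    "",
    "</details>",
  ].join("\n");
  await octokit.rest.issues.createComment({ owner, repo, issue_number: prNumber, body });
}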
Is there any variable (environment, system, resources) in the pipeline that holds the values for foo_repo and bar_repo? I am looking for the paths (just/code/foo and just/code/bar), as I don't want to duplicate them in the config.
$(Build.Repository.Name) will return the repo name for self, but what about the other repositories?
resources:
  repositories:
  - repository: foo_repo
    type: git
    name: just/code/foo
  - repository: bar_repo
    type: git
    name: just/code/bar
steps:
- checkout: foo_repo
- checkout: bar_repo
- checkout: self
When you check out multiple repositories, some details about the self repository are available as variables. When you use multi-repo triggers, some of those variables have information about the triggering repository instead. Details about all of the repositories consumed by the job are available as a template context object called resources.repositories.
For example, to get the ref of a non-self repository, you could write a pipeline like this:
resources:
  repositories:
  - repository: other
    type: git
    name: MyProject/OtherTools
variables:
  tools.ref: $[ resources.repositories['other'].ref ]
steps:
- checkout: self
- checkout: other
- bash: |
    echo "Tools version: $TOOLS_REF"
https://learn.microsoft.com/en-us/azure/devops/pipelines/repos/multi-repo-checkout?view=azure-devops#repository-details
The repositories context contains:
resources['repositories']['self'] =
{
  "alias": "self",
  "id": "<repo guid>",
  "type": "Git",
  "version": "<commit hash>",
  "name": "<repo name>",
  "project": "<project guid>",
  "defaultBranch": "<default ref of repo, like 'refs/heads/main'>",
  "ref": "<current pipeline ref, like 'refs/heads/topic'>",
  "versionInfo": {
    "author": "<author of tip commit>",
    "message": "<commit message of tip commit>"
  },
  "checkoutOptions": {}
}
steps:
- task: Docker@2
  displayName: Build and Push
  inputs:
    command: buildAndPush
    containerRegistry: dockerRegistryServiceConnection1
    repository: contosoRepository
    tags: |
      tag1
A convenience command called buildAndPush allows building and pushing images to a container registry in a single command; see the snippet above.
Question:
Do I need to log out from the container registry by adding the following task?
- task: Docker@2
  displayName: Logout of ACR
  inputs:
    command: logout
    containerRegistry: dockerRegistryServiceConnection1
In my opinion it is not necessary to log in or log out.
You can even find an example in the documentation without login or logout:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build job
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
So you may wonder what login actually does. If you check the source code, you will find that it actually sets up DOCKER_CONFIG (the location of your client configuration files):
export function run(connection: ContainerConnection): any {
    var defer = Q.defer<any>();
    connection.setDockerConfigEnvVariable();
    defer.resolve(null);
    return defer.promise;
}
and what logout does ;)
export function run(connection: ContainerConnection): any {
    // logging out is being handled in connection.close() method, called after the command execution.
    var defer = Q.defer<any>();
    defer.resolve(null);
    return <Q.Promise<any>>defer.promise;
}
So how does it work?
// Connect to any specified container registry
let connection = new ContainerConnection();
connection.open(null, registryAuthenticationToken, true, isLogout);

let dockerCommandMap = {
    "buildandpush": "./dockerbuildandpush",
    "build": "./dockerbuild",
    "push": "./dockerpush",
    "login": "./dockerlogin",
    "logout": "./dockerlogout"
}

let telemetry = {
    command: command,
    jobId: tl.getVariable('SYSTEM_JOBID')
};

console.log("##vso[telemetry.publish area=%s;feature=%s]%s",
    "TaskEndpointId",
    "DockerV2",
    JSON.stringify(telemetry));

/* tslint:disable:no-var-requires */
let commandImplementation = require("./dockercommand");
if (command in dockerCommandMap) {
    commandImplementation = require(dockerCommandMap[command]);
}

let resultPaths = "";
commandImplementation.run(connection, (pathToResult) => {
    resultPaths += pathToResult;
})
/* tslint:enable:no-var-requires */
.fin(function cleanup() {
    if (command !== "login") {
        connection.close(true, command);
    }
})
When you start a build command, the task will:
connect to container registry
run command
close connection (if this is not a login command)
And this is what close connection does:
If registry info is present, remove auth for only that registry. (This can happen for any command - build, push, logout etc.)
Else, remove all auth data. (This would happen only in case of logout command. For other commands, logout is not called.)
Answering your question: you can live without the login and logout commands.
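For intuition about what setDockerConfigEnvVariable and connection.close() accomplish, here is a rough sketch of the mechanism in plain Node.js. This is my own illustration of how DOCKER_CONFIG-based auth generally works, not the DockerV2 task's actual code, and the function names are made up:

// Rough illustration of DOCKER_CONFIG-based login/cleanup; not the DockerV2 task's real implementation.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

function loginByDockerConfig(registry: string, username: string, password: string): string {
  // Docker reads credentials from $DOCKER_CONFIG/config.json under the "auths" key.
  const configDir = fs.mkdtempSync(path.join(os.tmpdir(), "docker-config-"));
  const config = {
    auths: {
      [registry]: { auth: Buffer.from(`${username}:${password}`).toString("base64") },
    },
  };
  fs.writeFileSync(path.join(configDir, "config.json"), JSON.stringify(config));
  process.env["DOCKER_CONFIG"] = configDir; // roughly what "login" achieves
  return configDir;
}

function logoutByDockerConfig(configDir: string): void {
  // Roughly what closing the connection achieves: the auth data is removed again.
  fs.rmSync(configDir, { recursive: true, force: true });
  delete process.env["DOCKER_CONFIG"];
}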
OK, so I added Cypress to Aurelia during my configuration and it worked fine. When I went to set up Cypress on GitHub as just a command, I could not get it to recognize Puppeteer as a browser. So instead I used the official GitHub Action for Cypress, and that works:
- name: test
  uses: cypress-io/github-action@v1
  with:
    start: yarn start
    browser: ${{matrix.browser}}
    record: true
  env:
    # pass the Dashboard record key as an environment variable
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
However, I had to set my cypress.json as follows:
{
  "baseUrl": "http://localhost:8080",
  "fixturesFolder": "test/e2e/fixtures",
  "integrationFolder": "test/e2e/integration",
  "pluginsFile": "test/e2e/plugins/index.js",
  "screenshotsFolder": "test/e2e/screenshots",
  "supportFile": "test/e2e/support/index.js",
  "videosFolder": "test/e2e/videos",
  "projectId": "..."
}
Now running yarn e2e doesn't work because no server gets stood up; it no longer does that itself via cypress.config.js:
const CLIOptions = require('aurelia-cli').CLIOptions;
const aureliaConfig = require('./aurelia_project/aurelia.json');

const PORT = CLIOptions.getFlagValue('port') || aureliaConfig.platform.port;
const HOST = CLIOptions.getFlagValue('host') || aureliaConfig.platform.host;

module.exports = {
  config: {
    baseUrl: `http://${HOST}:${PORT}`,
    fixturesFolder: 'test/e2e/fixtures',
    integrationFolder: 'test/e2e/integration',
    pluginsFile: 'test/e2e/plugins/index.js',
    screenshotsFolder: 'test/e2e/screenshots',
    supportFile: 'test/e2e/support/index.js',
    videosFolder: 'test/e2e/videos'
  }
};
How can I make it so that yarn e2e works as it previously did, and have it working on GitHub? (I don't care which side of the equation is changed.)
Here's yarn e2e; I'm not sure what au is doing under the hood:
"e2e": "au cypress",
The easiest way to achieve this is to create a test/e2e/cypress-config.json:
{
  "baseUrl": "http://localhost:8080",
  "fixturesFolder": "test/e2e/fixtures",
  "integrationFolder": "test/e2e/integration",
  "pluginsFile": "test/e2e/plugins/index.js",
  "screenshotsFolder": "test/e2e/screenshots",
  "supportFile": "test/e2e/support/index.js",
  "videosFolder": "test/e2e/videos",
  "projectId": "1234"
}
and then set up the GitHub Action like this:
- name: test
  uses: cypress-io/github-action@v1
  with:
    config-file: test/e2e/cypress-config.json
    start: yarn start
    browser: ${{matrix.browser}}
    record: true
  env:
    # pass the Dashboard record key as an environment variable
    CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
The path doesn't matter, as long as you configure the same one in both places. Just make sure it doesn't overlap with what Aurelia wants.
I'm trying to set a GitHub status from a Jenkins job. Jenkins returns
[Set GitHub commit status (universal)] SUCCESS on repos [] (sha:9892fbd) with context:ci/jenkins/tests
... but the status isn't set when I query it with the REST API later.
Here's the Groovy code:
def getCommitHash() {
  sh(script: """
    git rev-parse HEAD
  """, returnStdout: true).trim()
}

def setCountTestLocation(String location) {
  url = "https://<internal github>/<org>/<repo>"
  commitHash = getCommitHash()
  print(url)
  print(commitHash)
  step([
    $class: "GitHubCommitStatusSetter",
    reposSource: [$class: "ManuallyEnteredRepositorySource", url: url],
    contextSource: [$class: "ManuallyEnteredCommitContextSource", context: "ci/jenkins/tests"],
    statusBackrefSource: [$class: "ManuallyEnteredBackrefSource", backref: location],
    errorHandlers: [[$class: "ChangingBuildStatusErrorHandler", result: "UNSTABLE"]],
    commitShaSource: [$class: "ManuallyEnteredShaSource", sha: commitHash],
    statusResultSource: [$class: "ConditionalStatusResultSource", results: [[$class: "AnyBuildResult", message: "Tests here!", state: "SUCCESS", location: location]]]
  ]);
}
Your repository hasn't been updated because, it seems, the repos were not properly set.
The plugin still reports success because it completed its run, but the repo list is empty, as is evident in your message: SUCCESS on repos [].
This issue can occur if you have not set up a "GitHub Server" config under the global Jenkins configs:
Manage Jenkins -> Configure System -> GitHub
You can find more details on how to set up a server configuration under the "Automatic Mode" section of the GitHub Plugin documentation:
https://wiki.jenkins-ci.org/display/JENKINS/GitHub+Plugin#GitHubPlugin-AutomaticMode%28Jenkinsmanageshooksforjobsbyitself%29
After much pain with the same issue and plugin, here is a workaround that does not require the plugin and still solves the issue, using curl. You can add the following to your pipeline:
post {
  success {
    withCredentials([usernamePassword(credentialsId: 'your_credentials_id', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
      sh 'curl -X POST --user $USERNAME:$PASSWORD --data "{\\"state\\": \\"success\\"}" --url $GITHUB_API_URL/statuses/$GIT_COMMIT'
    }
  }
  failure {
    withCredentials([usernamePassword(credentialsId: 'your_credentials_id', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
      sh 'curl -X POST --user $USERNAME:$PASSWORD --data "{\\"state\\": \\"failure\\"}" --url $GITHUB_API_URL/statuses/$GIT_COMMIT'
    }
  }
}
Where the GITHUB_API_URL is usually constructed like so, for example in the environment directive:
environment {
  GITHUB_API_URL='https://api.github.com/repos/organization_name/repo_name'
}
The credentialsId can be created and obtained from Jenkins -> Credentials.
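The curl call above is just the "create a commit status" REST endpoint (POST /repos/{owner}/{repo}/statuses/{sha}). If you prefer a scripted client over raw curl, a minimal sketch with Octokit looks like this; the owner, repo, context, and the GITHUB_API_BASE variable are placeholders or assumptions you would replace with your own values:

// Sketch: set a commit status through the GitHub REST API, equivalent to the curl above.
import { Octokit } from "@octokit/rest";

async function setCommitStatus(sha: string, state: "success" | "failure" | "error" | "pending"): Promise<void> {
  const octokit = new Octokit({
    auth: process.env.GITHUB_TOKEN,
    // For GitHub Enterprise, point baseUrl at your server's API root (assumption: adjust as needed).
    baseUrl: process.env.GITHUB_API_BASE ?? "https://api.github.com",
  });
  await octokit.rest.repos.createCommitStatus({
    owner: "organization_name",        // placeholder, as in the GITHUB_API_URL example above
    repo: "repo_name",                 // placeholder
    sha,
    state,
    context: "ci/jenkins/tests",
    target_url: process.env.BUILD_URL, // Jenkins exposes the build URL in this variable
  });
}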