How to trigger a build for changes in a subdirectory of a Git repo in buildbot

Say you have a repo with this structure:
myrepo/project1
myrepo/project2
How do you configure buildbot so that it only triggers a build when there's an update in myrepo/project1?
Following is a sample config I have that triggers on the whole repo:
step_build = steps.ShellCommand(name='somebuildcommand',
                                command=['some', 'build', 'command'],
                                workdir="build/",
                                description='some build command')

factory = util.BuildFactory()
# check out the source
factory.addStep(steps.Git(repourl='https://github.com/some/myrepo.git', mode='incremental'))
factory.addStep(step_build)

c['builders'] = []
c['builders'].append(
    util.BuilderConfig(name="runtests",
                       workernames=["example-worker"],
                       factory=factory))

OK, I figured this out myself. Basically, I needed to configure the scheduler to only trigger on "important" files; example below:
def file_is_important(change):
    if not change.files:
        return False
    for file in change.files:
        if file.startswith('important-dir/'):
            print('detected important changes:', change.files)
            return True
    return False

c['schedulers'] = []
c['schedulers'].append(schedulers.SingleBranchScheduler(
    name="all",
    fileIsImportant=file_is_important,
    change_filter=util.ChangeFilter(branch='master'),
    treeStableTimer=None,
    builderNames=["runtests"]))

Related

Rundeck stop running steps based on global variable

I have a Rundeck job that executes multiple steps, each of which is a Job Reference to another small job. The first step selects a server to upgrade and sets a global variable with the server name. The remaining steps perform upgrade tasks. It is possible, though, for the first step to return NONE as the server name, and if that's the case I would like to halt execution right there without running the remaining steps, and I'd like the whole job to be marked as Successful.
I could just make that first job exit with an error code, but then the whole job looks like it failed, as if something were wrong with it, even though it ran successfully and found there was nothing to upgrade.
Any ideas? I'm finding "use a flow control step" everywhere, but I can't see how to make that work for my use case.
The best way to create complex workflows that depend on some output value is to use the Ruleset Strategy (Rundeck Enterprise); take a look at its documentation.
On the community version you can save the result of the first step in a key-value variable and do some "script-fu" in the following steps:
Step 1: print the status and save it in a data variable using the key-value data log filter.
Steps 2, 3, 4: capture the key-value data, and then the step can continue or not.
I made an example that's easy to import into your instance for testing:
- defaultTab: nodes
  description: ''
  executionEnabled: true
  id: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
  loglevel: INFO
  name: HelloWorld
  nodeFilterEditable: false
  options:
  - enforced: true
    name: opt1
    required: true
    value: 'true'
    values:
    - 'true'
    - 'false'
    valuesListDelimiter: ','
  plugins:
    ExecutionLifecycle: null
  scheduleEnabled: true
  sequence:
    commands:
    - exec: echo "url=${option.opt1}"
      plugins:
        LogFilter:
        - config:
            invalidKeyPattern: \s|\$|\{|\}|\\
            logData: 'true'
            name: result
            regex: .*=\s*(.+)$
          type: key-value-data
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step two"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step three"
        fi
      scriptInterpreter: /bin/bash
    - fileExtension: .sh
      interpreterArgsQuoted: false
      script: |-
        # data/value evaluation
        if [ "@data.result@" = "true" ]; then
          echo "step four"
        fi
      scriptInterpreter: /bin/bash
    keepgoing: false
    strategy: node-first
  uuid: 27de501a-8bb2-4c6e-a5f9-0676e80ca75a
MegaDrive68k's answer covers the best you can do with the basic open source version, or with the Enterprise version.
But you can also create your own plugin, or fork an existing one.
That's what I did with the official Flow Control plugin, adding conditions.
You can fork this plugin and add two new @PluginProperty annotations in the Java code (these add two new fields to the plugin's parameters in the Rundeck interface) and compare their values.
Example:

@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;
Comparing String values (which is your case):
if (value1.equals(value2)) {...}
Comparing numeric values (the properties arrive as Strings, so parse them first):
if (Integer.parseInt(value1) == Integer.parseInt(value2)) {...}
If you want to stop the job as successful (this does not stop the parent job, just the current one):
context.getFlowControl().Halt(true);
If you want to stop the job with a failed status:
context.getFlowControl().Halt(false);
If you want to stop the job with a customized status:
context.getFlowControl().Halt("MY CUSTOM STATUS");
And finally, if you want to continue and not stop:
context.getFlowControl().Continue();
So a complete example (add this to your public class):

@PluginProperty(title = "First Value", description = "Compare this", required = true)
String value1;

@PluginProperty(title = "Second Value", description = "To this", required = true)
String value2;

@Override
public void executeStep(final PluginStepContext context, final Map<String, Object> configuration)
        throws StepException {
    if (value1.equals(value2)) {
        // Halt the current job without marking it failed
        context.getFlowControl().Halt(true);
    } else {
        // Continue
        context.getFlowControl().Continue();
    }
}
Then create your jar file and place it in the libext folder.
Now you can add your custom step. Put your global variable in the first field and "NONE" in the second field.
If the global variable contains "NONE", the job stops successfully at this step.
If you call a job with this step from another (parent) job, the parent job continues.
If you want, you can use this forked plugin, which already includes these modifications.

terraform plan recreates resources on every run with terraform cloud backend

I am running into an issue where terraform plan recreates resources that don't need to be recreated every run. This is an issue because some of the steps depend on those resources being available, and since they are recreated with each run, the script fails to complete.
My setup is Github Actions, Linode LKE, Terraform Cloud.
My main.tf file looks like this:
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}
provider "linode" {
}
provider "helm" {
debug = true
kubernetes {
config_path = "${local_file.kubeconfig.filename}"
}
}
resource "linode_lke_cluster" "lke_cluster" {
label = "MY-LABEL-HERE"
k8s_version = "1.21"
region = "us-central"
pool {
type = "g6-standard-2"
count = 3
}
}
and my outputs.tf file:
resource "local_file" "kubeconfig" {
depends_on = [linode_lke_cluster.lke_cluster]
filename = "kube-config"
# filename = "${path.cwd}/kubeconfig"
content = base64decode(linode_lke_cluster.lke_cluster.kubeconfig)
}
resource "helm_release" "ingress-nginx" {
# depends_on = [local_file.kubeconfig]
depends_on = [linode_lke_cluster.lke_cluster, local_file.kubeconfig]
name = "ingress"
repository = "https://kubernetes.github.io/ingress-nginx"
chart = "ingress-nginx"
}
resource "null_resource" "custom" {
depends_on = [helm_release.ingress-nginx]
# change trigger to run every time
triggers = {
build_number = "${timestamp()}"
}
# download kubectl
provisioner "local-exec" {
command = "curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl"
}
# apply changes
provisioner "local-exec" {
command = "./kubectl apply -f ./k8s/ --kubeconfig ${local_file.kubeconfig.filename}"
}
}
In Github Actions, I'm running these steps:
jobs:
  init-terraform:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./terraform
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          ref: 'privatebeta-kubes'
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
      - name: Terraform Init
        run: terraform init
      - name: Terraform Format Check
        run: terraform fmt -check -v
      - name: List terraform state
        run: terraform state list
      - name: Terraform Plan
        run: terraform plan
        id: plan
        env:
          LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
When I look at the results of terraform state list I can see my resources:
Run terraform state list
terraform state list
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin state list
helm_release.ingress-nginx
linode_lke_cluster.lke_cluster
local_file.kubeconfig
null_resource.custom
But my terraform plan fails and the issue seems to stem from the fact that those resources try to get recreated.
Run terraform plan
terraform plan
shell: /usr/bin/bash -e {0}
env:
TERRAFORM_CLI_PATH: /home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321
LINODE_TOKEN: ***
/home/runner/work/_temp/3f9749b8-515b-4cb4-8053-1a6318496321/terraform-bin plan
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.
Preparing the remote plan...
Waiting for the plan to start...
Terraform v1.0.2
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
linode_lke_cluster.lke_cluster: Refreshing state... [id=31946]
local_file.kubeconfig: Refreshing state... [id=fbb5520298c7c824a8069397ef179e1bc971adde]
helm_release.ingress-nginx: Refreshing state... [id=ingress]
╷
│ Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory
│
│ with helm_release.ingress-nginx,
│ on outputs.tf line 8, in resource "helm_release" "ingress-nginx":
│ 8: resource "helm_release" "ingress-nginx" {
Is there a way to tell terraform it doesn't need to recreate those resources?
Regarding the actual error shown, Error: Kubernetes cluster unreachable: stat kube-config: no such file or directory, which references your outputs file: I found an issue that could help with your specific error: https://github.com/hashicorp/terraform-provider-helm/issues/418. (With a remote backend, the plan runs on Terraform Cloud's workers, so a kube-config file written during a previous apply most likely doesn't exist there.)
One other thing looks strange to me: why does your outputs.tf declare resources and not outputs? Shouldn't your outputs.tf look something like this?
output "local_file_kubeconfig" {
value = "reference.to.resource"
}
Also, your state file / backend config looks properly configured.
I recommend logging into your terraform cloud account to verify that the workspace is indeed there, as expected. It's the state file that tells terraform not to re-create the resources it manages.
If the resources are already there and terraform is trying to re-create them, that could indicate that those resources were created prior to using terraform, or possibly within another terraform cloud workspace or plan; in that case, terraform import can adopt existing resources into the current workspace's state.
Did you end up renaming your backend workspace at any point with this plan? I'm referring to your main.tf file, this part where it says MY-WORKSPACE-HERE :
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "=1.16.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "=2.1.0"
    }
  }
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "MY-ORG-HERE"
    workspaces {
      name = "MY-WORKSPACE-HERE"
    }
  }
}
Unfortunately I am not a Kubernetes expert, so possibly more help can be found there.

GitHub actions to trigger build on new Pull Requests

I have the following workflow to trigger CMake builds on my GitHub project:
name: C/C++ CI

on:
  push:
    branches: [ master, develop ]
  pull_request:
    types: [ opened, edited, reopened, review_requested ]
    branches: [ master, develop ]

jobs:
  build:
    runs-on: ubuntu-18.04
    steps:
      - name: Install deps
        run: sudo apt-get update; sudo apt-get install python3-distutils libfastjson-dev libcurl4-gnutls-dev libssl-dev -y
      - uses: actions/checkout@v2
      - name: Run CMake
        run: mkdir build; cd build; cmake .. -DCMAKE_INSTALL_PREFIX=/home/runner/work/access/build/ext_install;
      - name: Run make
        run: cd build; make -j8
I expected it to trigger builds on new Pull Requests and to use the build status as a condition for approving the merge.
However, I'm finding it a bit challenging to achieve that result; I'm sort of a newbie when it comes to GitHub Actions.
I'm able to accomplish your scenario with a combination of github actions and github protected branch settings.
You've got your github actions setup correctly to run on a Pull Request with a destination branch: master or develop.
Now you have to configure your repo to prevent merging a PR if the CI fails:
On your GitHub repo, go to Settings => Branches => Add a rule => set the branch name pattern to master => enable 'Require status checks to pass before merging' => under 'Status checks found in the last week for this repository', pick the CI build you want to enforce.
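If you'd rather configure that rule programmatically than click through the UI, the same setting is exposed through GitHub's branch protection REST API. A minimal Python sketch (the org/repo names, token, and the "build" check context are placeholders you'd substitute with your own values):

import requests

OWNER, REPO, BRANCH = "your-org", "your-repo", "master"  # placeholders
TOKEN = "..."  # a personal access token with admin rights on the repo

# Require the named status checks to pass before merging into BRANCH.
resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "required_status_checks": {"strict": True, "contexts": ["build"]},
        "enforce_admins": True,
        "required_pull_request_reviews": None,
        "restrictions": None,
    },
)
resp.raise_for_status()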
As of this writing there is no way to do that using only GitHub Actions, but you can do it by writing an action, using JavaScript or another of the languages supported by GitHub Actions.
import * as core from '@actions/core'
import * as github from '@actions/github'
import {getRequiredEnvironmentVariable} from "./utils";

type GitHubStatus = { context: string, description?: string, state: "error" | "failure" | "pending" | "success", target_url?: string }

function commitStatusFromConclusion(conclusion: CheckConclusion): GitHubStatus {
  let status: GitHubStatus = {
    context: "branch-guard",
    description: "Checks are running...",
    state: "pending",
  };
  if (conclusion.allCompleted) {
    if (conclusion.failedCheck) {
      status.state = "failure";
      status.description = `${conclusion.failedCheck.appName} ${conclusion.failedCheck.conclusion}`;
      status.target_url = conclusion.failedCheck.url
    } else {
      status.state = "success";
      status.description = "All checks are passing!";
    }
  }
  return status;
}

export async function setStatus(repositoryOwner: string, repositoryName: string, sha: string, status: GitHubStatus): Promise<number> {
  let api = new github.GitHub(getRequiredEnvironmentVariable('GITHUB_TOKEN'));
  let params = {
    owner: repositoryOwner,
    repo: repositoryName,
    sha: sha,
  };
  let response = await api.repos.createStatus({...params, ...status});
  return response.status
}
and after you create the action, you only have to call the step inside your workflow:
on:
  pull_request: # to update newly open PRs or when a PR is synced
  check_suite: # to update all PRs upon a Check Suite completion
    types: ['completed']

name: Branch Guard
jobs:
  branch-guard:
    name: Branch Guard
    if: github.event.check_suite.head_branch == 'master' || github.event.pull_request.base.ref == 'master'
    runs-on: ubuntu-latest
    steps:
      - uses: YOUR-REP/YOUR-ACTION@v0.1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
If you want more documentation about creating JavaScript actions, see GitHub's guide on building actions.
I got this example from:
Block PR merges when Checks for target branches are failing
Hope that it helps you.

Jenkins Github plugin doesn't set status

I'm trying to set a github status from a Jenkins job. Jenkins returns a
[Set GitHub commit status (universal)] SUCCESS on repos [] (sha:9892fbd) with context:ci/jenkins/tests
... but the status isn't set when I query it with the REST API later.
Here's the Groovy code:
def getCommitHash() {
    sh(script: """
        git rev-parse HEAD
    """, returnStdout: true).trim()
}

def setCountTestLocation(String location) {
    url = "https://<internal github>/<org>/<repo>"
    commitHash = getCommitHash()
    print(url)
    print(commitHash)
    step([
        $class: "GitHubCommitStatusSetter",
        reposSource: [$class: "ManuallyEnteredRepositorySource", url: url],
        contextSource: [$class: "ManuallyEnteredCommitContextSource", context: "ci/jenkins/tests"],
        statusBackrefSource: [$class: "ManuallyEnteredBackrefSource", backref: location],
        errorHandlers: [[$class: "ChangingBuildStatusErrorHandler", result: "UNSTABLE"]],
        commitShaSource: [$class: "ManuallyEnteredShaSource", sha: commitHash],
        statusResultSource: [$class: "ConditionalStatusResultSource", results: [[$class: "AnyBuildResult", message: "Tests here!", state: "SUCCESS", location: location]]]
    ]);
}
Your repository hasn't been updated, as it seems the repos were not properly set.
The plugin still reports success because it completed its own run properly, but the repo list is empty, as is evident in your message: SUCCESS on repos [].
This issue can occur if you have not set up a "GitHub Server" config under the global Jenkins configs:
Manage Jenkins -> Configure System -> GitHub
You can find more details on how to set up a server configuration under the "Automatic Mode" section of the GitHub Plugin documentation:
https://wiki.jenkins-ci.org/display/JENKINS/GitHub+Plugin#GitHubPlugin-AutomaticMode%28Jenkinsmanageshooksforjobsbyitself%29
After much pain with the same issue and plugin, here is a fix not for this particular plugin, but rather a workaround that does not require a plugin and still solves the issue, using curl. You can add the following to your pipeline:
post {
    success {
        withCredentials([usernamePassword(credentialsId: 'your_credentials_id', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
            sh 'curl -X POST --user $USERNAME:$PASSWORD --data "{\\"state\\": \\"success\\"}" --url $GITHUB_API_URL/statuses/$GIT_COMMIT'
        }
    }
    failure {
        withCredentials([usernamePassword(credentialsId: 'your_credentials_id', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
            sh 'curl -X POST --user $USERNAME:$PASSWORD --data "{\\"state\\": \\"failure\\"}" --url $GITHUB_API_URL/statuses/$GIT_COMMIT'
        }
    }
}
Where the GITHUB_API_URL is usually constructed like so, for example in the environment directive:
environment {
    GITHUB_API_URL = 'https://api.github.com/repos/organization_name/repo_name'
}
The credentialsId can be created and obtained from Jenkins -> Credentials.
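As a sanity check, you can query the commit's statuses directly to see whether anything actually landed. A short Python sketch (owner, repo, sha, and the token are placeholders for your own values):

import requests

OWNER, REPO, SHA = "organization_name", "repo_name", "9892fbd"  # placeholders
TOKEN = "..."  # a token with access to the repo

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/commits/{SHA}/statuses",
    headers={"Authorization": f"token {TOKEN}",
             "Accept": "application/vnd.github+json"},
)
resp.raise_for_status()
# Print each status' context and state, e.g. "ci/jenkins/tests -> success"
for status in resp.json():
    print(status["context"], "->", status["state"])

For GitHub Enterprise, swap the api.github.com base URL for your instance's API endpoint.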

How to run continuous integration in parallel across multiple Pull Requests?

I am testing use of Jenkins with the GitHub Pull Request Builder plugin. I have successfully set up a toy project on GitHub and a dev installation of Jenkins so that raising a PR, or pushing changes to a PR branch, triggers a build. Mostly this works as required; a few things don't match our preferred workflow, but the freedom from having to write and maintain our own plugin is a big deal.
I have one potential showstopper. The plugin queues up all pushes in all PRs it sees, and only ever seems to run a single job at a time, even with spare executors available. In the real world project, we may have 10 active PRs, each may get a few pushed updates in a day in response to QC comments, and the full CI run takes > 30 mins. However, we do have enough build executors provisioned to run multiple jobs at the same time.
I cannot see any way to configure the PR request builder to process multiple jobs at once on the same trigger, but I may be missing something basic elsewhere in Jenkins. Is there a way to do this, without needing to customise the plugin?
I have installed Jenkins ver. 1.649 on a new Ubuntu 14.04 server (on a VirtualBox guest) and followed the README in the ghprb plugin (currently version 1.30.5), including setting up a jenkins "bot" account on Github as a collaborator to run all the integration API calls to Github.
I was wondering what the behaviour would be if I cloned the job (create new item and "Copy existing item"), and may try that next, but I expect that will result in the same job being run multiple times for no benefit as opposed to interacting smartly with other jobs polling the same pool of PRs.
I found the config setting while exploring more for this question.
It is really easy when you know which config item it is, but Jenkins has a lot of configuration to work through, especially when you are exploring the plugins.
The key thing is that the option to serve queued jobs in parallel (available executors allowing) is core Jenkins config, not part of the GitHub PR builder.
So, just check the option Execute concurrent builds if necessary. This option is found at the bottom of the first, untitled section of config. It is a really basic Jenkins option that a newbie like me missed amid the mountain of other options.
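If you have many jobs to flip, the same checkbox maps to the <concurrentBuild> element in each job's config.xml, so you can toggle it programmatically. A rough sketch using the python-jenkins library (the server URL, credentials, and job name are placeholders; the naive string replace assumes the element is present and currently false):

import jenkins  # pip install python-jenkins

server = jenkins.Jenkins('http://jenkins.example.com:8080',
                         username='admin', password='api-token')  # placeholders

config_xml = server.get_job_config('my-pr-builder')
# "Execute concurrent builds if necessary" is stored as <concurrentBuild>.
config_xml = config_xml.replace('<concurrentBuild>false</concurrentBuild>',
                                '<concurrentBuild>true</concurrentBuild>')
server.reconfig_job('my-pr-builder', config_xml)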
Maybe it is too late to answer this question, but after a few days of research I figured out a way to create multiple jobs per PR in GitHub.
The code I am showing here applies to GitHub Enterprise, but it works well enough for general GitHub (or Bitbucket) as well, with a few tweaks to the URL and git commands.
The mainline repository against which the PRs are created needs to have a file, which I call PRJob.groovy, containing:
import groovy.json.JsonSlurper

gitUrl = GIT_URL
repoRestUrl = "${GITHUB_WEB_URL}/repos/${project}/${repo}"

def getJSON(url) {
    def conn = (HttpURLConnection) new URL(url).openConnection()
    conn.setRequestProperty("Authorization", "token ${OAUTH_TOKEN}");
    return new JsonSlurper().parse(new InputStreamReader(conn.getInputStream()))
}

def createPipeline(name, description, branch, prId) {
    return pipelineJob(name) {
        delegate.description description
        if (ENABLE_TRIGGERS == 'true') {
            triggers {
                cron 'H H/8 * * *'
                scm 'H/5 * * * *'
            }
        }
        quietPeriod(60)
        environmentVariables {
            env 'BRANCH_NAME', branch
            env 'PULL_REQUEST', prId
            env 'GITHUB_WEB_URL', GITHUB_WEB_URL
            env 'OAUTH_TOKEN', OAUTH_TOKEN
            env 'PROJECT', project
            env 'REPO', repo
        }
        definition {
            cpsScm {
                scriptPath "Jenkinsfile"
                scm {
                    git {
                        remote {
                            credentials "jenkins-ssh-key"
                            delegate.url gitUrl
                            if (prId != "") {
                                refspec "+refs/pull/${prId}/*:refs/remotes/origin/pr/${prId}/*"
                            }
                        }
                        delegate.branch branch
                    }
                }
            }
        }
    }
}

def createPRJobs() {
    def prs = getJSON("${repoRestUrl}/pulls?state=open")
    if (prs.size() == 0) {
        def mergedPrs = getJSON("${repoRestUrl}/pulls?state=closed")
        if (mergedPrs.size() == 0) {
            throw new RuntimeException("No pull-requests found; auth token has likely expired")
        }
    }
    prs.each { pr ->
        def id = pr.get("number")
        def title = pr.get("title")
        def fromRef = pr.get("head")
        def fromBranchName = fromRef.get("ref")
        def prRepo = fromRef.get("repo")
        def repoName = prRepo.get("name")
        def prHref = pr.get("url")
        createPipeline("${repo}-PR-${id}-${fromBranchName}",
                "${prHref} Pull Request ${id}: ${title}", "origin/pr/${id}/head", id)
    }
}

createPRJobs()
This creates one Jenkins job per PR.
It relies on the project having a Jenkinsfile that can be picked up to run a pipeline job. A sample Jenkinsfile looks like this:
// Jenkinsfile for building and creating jobs
commitId = null
repoRestUrl = "${GITHUB_WEB_URL}/repos/${PROJECT}/${REPO}"

try {
    stage('Install and Tests') {
        runTest("Hello")
    }
    notify_github 'success'
} catch (Exception e) {
    notify_github 'failure'
    print e
    throw e
}

def runTest(String someDummyVariable) {
    node {
        checkout scm
        sh 'git clean -qdf'
        if (env.PULL_REQUEST == "") {
            sh 'git rev-parse --verify HEAD > commit.txt'
        } else {
            // We check out the PR after it is merged with master, but we need
            // to report the result against the commit before the merge
            sh "git rev-parse refs/remotes/origin/pr/${env.PULL_REQUEST}/head^{commit} > commit.txt"
        }
        commitId = readFile 'commit.txt'
        echo commitId
        sh 'rm -f commit.txt'
        // Here goes your code for doing anything
        sh 'echo "Hello World!!!!!"'
    }
}

def http_post(url, rawJson) {
    def conn = (HttpURLConnection) new URL(url).openConnection()
    conn.setRequestProperty("Authorization", "token ${OAUTH_TOKEN}");
    conn.doOutput = true
    conn.requestMethod = "POST"
    conn.setRequestProperty("Content-Type", "application/json")
    def wr = new OutputStreamWriter(conn.getOutputStream());
    wr.write(rawJson);
    wr.close()
    def code = conn.getResponseCode()
    if (code < 200 || code >= 300) {
        println 'Failed to post to ' + url
        def es = conn.getErrorStream();
        if (es != null) {
            println es.getText()
        }
    }
}

def notify_github(state) {
    http_post(
        "${repoRestUrl}/statuses/${commitId}",
        """
        { "state": "${state}",
          "target_url": "${env.BUILD_URL}",
          "description": "Build Pipeline",
          "context": "Build Pipeline"
        }
        """
    )
}
Hope this helps someone.