How to make SCM polling work with the Jenkins Workflow plugin

In a normal freestyle project, I configure the SCM plugin to point to the Git repo that I want to release, and I enable the "Poll SCM" option, which allows me to configure a Stash webhook to tell Jenkins whenever there has been a change to that repo. In this way, the job can be triggered whenever a change is pushed to the repo.
But when I use a workflow instead of a freestyle project, the SCM of the code that I need to build is specified programmatically in the Groovy workflow script, which means the job is not listening for the Stash webhook. The SCM that is configured directly on the workflow job is the SCM of the Groovy script itself, which is different from the codebase that I am trying to build/release, so I don't want the trigger to be based on that.
node('docker_builder') {
    // serviceRepo, getVersion() and getPipelineSpec() are defined elsewhere in the full script
    git url: serviceRepo
    releaseVersion = getVersion()
    pipelineSpec = getPipelineSpec()
    sh "./gradlew clean build pushDockerImage"
}
Any ideas about how to achieve SCM polling when using the workflow plugin?

I resolved this question after a lot of research and experimentation. This documentation got me on the right track: https://github.com/jenkinsci/workflow-scm-step-plugin/blob/master/README.md. It says:
"Polling is supported across multiple SCMs (changes in one or more will trigger a new build), and again is done according to the SCMs used in the last build of the workflow."
This means that SCM polling is still supported with a Jenkins workflow, but unlike a normal freestyle project, you have to run the workflow once manually before it starts listening for SCM changes. This makes sense because the SCMs are defined in Groovy code; they are not known until the script has run once.
One tricky element is that you can define many SCMs in your workflow. For example, I have three: one for the service itself, one for a deployment script, and one for the Groovy workflow DSL. By default, changes to any of those three SCMs would cause the "Poll SCM" option to trigger a build, which may not be desirable. Luckily, setting the "poll: false" option on the "git" step in the Groovy code disables polling on that repo. If you are reading your Groovy DSL from an SCM, then you can disable polling on that repo by clicking "additional behaviors" in the Jenkins UI and adding "Don't trigger a build on commit notifications".
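For illustration, here is a minimal sketch of a workflow that polls the service repo but not the deployment-script repo; the repository URLs and directory layout are placeholders, not from my actual job:

node('docker_builder') {
    // Changes to the service repo should trigger the job (polling is on by default)
    git url: 'ssh://git@stash.example.com/proj/service.git'

    // Changes to the deployment scripts should NOT trigger the job
    dir('deploy') {
        git url: 'ssh://git@stash.example.com/proj/deploy-scripts.git', poll: false
    }

    sh "./gradlew clean build pushDockerImage"
}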
Another tricky element is that the Stash webhook plugin by default includes the SHA1 hash of the commit in the RESTful URL that it hits Jenkins with. Unfortunately, Jenkins makes the mistake of using that same commit hash when it tries to pull from any of the multiple SCMs that you may have defined. The hash is of course only relevant to one SCM, so the others break. You can get around this by setting "Omit SHA1 Hash Code" in the Stash webhook plugin. Then Jenkins will just use the latest commit on whatever branch you build from in each of your SCMs.

Related

Prevent a Jenkins build caused by git push in same job

I have written a Jenkinsfile for use in a GitHub Organization object in Jenkins. Part of the Jenkinsfile is error correcting, and often makes changes to the project. At the end of the pipeline, Jenkins commits the changes to our git repository.
Now that I have all of this working, I have realized that upon pushing the corrections to the repo, the same job that just ran and corrected the errors will run again, since it is triggered by webhooks. It wouldn't be an infinite loop because there would be no errors to correct after running the first time, but it would run one more time than it needs to.
The fact that the changes made by the job end up triggering the job is problematic because the pipeline includes a deploy step. Redeploying the same code twice in a row feels like the result of a poorly written pipeline. That said, short of searching the last commit message for keywords, I cannot figure out a good way to make commits from the job not trigger the job. Is there a less hacky way to solve this problem?
edit: I am aware of the ci-skip plugin, but I can't find documentation for making it work in a declarative pipeline. I would prefer an approach that lets me do something with the default scm checkout step at the start of the build.
edit 2: I ended up just running a grep of the last commit message every time the pipeline starts to see if the message contains the text [ci skip]. It's dirty, but it works if it's placed in a setup stage of the pipeline.
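For reference, a minimal sketch of that grep approach in a declarative pipeline; the stage names are placeholders and the remaining stages are elided:

pipeline {
    agent any
    stages {
        stage('Setup') {
            steps {
                script {
                    // Exit code 0 means the last commit message contains "[ci skip]"
                    def skipped = sh(
                        script: "git log -1 --pretty=%B | grep -q '\\[ci skip\\]'",
                        returnStatus: true
                    ) == 0
                    if (skipped) {
                        // Stop here so the job's own commit doesn't rebuild and redeploy
                        error('Last commit was made by this job ([ci skip]); stopping.')
                    }
                }
            }
        }
        // ... normal build, error-correction, and deploy stages go here ...
    }
}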
In the configuration for checking out a git repo, you can tell it to ignore certain commit messages (under the advanced checkout behaviors) rather than using code to do it; see the sketch after these options. However, I'm not sure if a GitHub org job can do the same thing.
Another option would be to ignore certain directories, if the changes are confined to certain directories and those directories aren't normally changed otherwise. Again, I'm not sure if a GitHub org job supports this.
You could also stop using webhook triggers, but that isn't a preferred option.
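As a hedged sketch of the checkout-level configuration from the first option, here is what the git plugin's scripted checkout step could look like with the "ignore certain messages" and "ignore certain paths" behaviors; the URL and patterns are placeholders, and whether a GitHub Organization job honors them is exactly the open question above:

checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    userRemoteConfigs: [[url: 'https://github.com/your-org/your-repo.git']],
    extensions: [
        // "Polling ignores commits with certain messages"
        [$class: 'MessageExclusion', excludedMessage: '.*\\[ci skip\\].*'],
        // "Polling ignores commits in certain paths"
        [$class: 'PathRestriction', excludedRegions: 'generated/.*', includedRegions: '']
    ]
])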

How to trigger E2E tests when repositories are changed or pull requests added?

The only similar question I found is the following: Trigger travis ci builds if another git repository updates
Keep in mind that our E2E tests are full stack: we have a live server running, and our tests are run against the UI which hits the server. Nothing is stubbed or mocked.
Now, I can trigger a Travis rebuild that way; however, the problem arises when there is an interdependency between a branch in one repository and a branch in a different repository.
So let's say I have 3 repositories: backend, frontend and e2etests. If I create a new branch frontend/foo that also requires backend/foo, there is no way the E2E tests will ever pass, because they will run one time with frontend/foo and backend/master and another time with frontend/master and backend/foo.
Did anyone face this problem previously? How did you deal with it?
After a lot of digging it was clear that there is no off-the-shelf solution and everyone builds their own.
What worked for us is a git pre-push hook plus environment variables pointing to our repositories. When you push, the hook asks whether you want to use a different branch for the UI project (if you are on the UI project, it asks about the server project instead), and by default it uses the current branch of the project you are on. So it's quite fast on push; usually you'll just press ENTER one extra time. The two branches are essentially the required dependencies, and a Travis build is then triggered through an API endpoint.
Works fine so far.
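The hook itself is just a script; purely to illustrate the final step (triggering the Travis build for a chosen branch through the API), here is a hypothetical Groovy sketch. The repo slug, token environment variable, and accepted status codes are assumptions, and passing the two dependency branches along (e.g. as environment variables in the request config) is not shown:

def triggerTravisBuild(String repoSlug, String branch, String token) {
    // POST /repo/{slug}/requests on the Travis API v3 queues a build for the given branch
    def url = new URL("https://api.travis-ci.com/repo/${URLEncoder.encode(repoSlug, 'UTF-8')}/requests")
    HttpURLConnection conn = url.openConnection()
    conn.requestMethod = 'POST'
    conn.doOutput = true
    conn.setRequestProperty('Content-Type', 'application/json')
    conn.setRequestProperty('Travis-API-Version', '3')
    conn.setRequestProperty('Authorization', "token ${token}")
    conn.outputStream.withWriter { it << """{"request": {"branch": "${branch}"}}""" }
    assert conn.responseCode in [200, 201, 202] : "Travis trigger failed: HTTP ${conn.responseCode}"
}

// The pre-push hook would end up calling something like this with the branch the user picked:
triggerTravisBuild('your-org/e2etests', 'foo', System.getenv('TRAVIS_TOKEN'))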

How to set up a github pull request build in a Jenkinsfile?

So, I've been using Jenkins for quite a while. I have set up numerous projects with the Github Pull Request Builder plugin to run tests whenever someone opens a pull request, and then trigger some other job (build, push, deploy, etc) whenever the pull request actually gets merged to master.
So, is there any way to set this up with a Jenkinsfile, or the organization folders, or the multibranch build deal?
The github-organization-folder plugin in combination with the multibranch plugin offers exactly this awesome feature: it scans a whole organization (optionally restricted to certain patterns in repo/branch names) for Jenkinsfiles and automatically adds jobs. This also happens for pull requests.
Once the PR is closed, it automatically removes the job.
To avoid arbitrary code execution, an organization member has to trigger building the job (same as for the GPRB plugin). The trigger phrase can be configured in the Jenkins system settings.
EDIT: Under the Advanced section in Jenkins, you find options about which types of PR you want to build. If you build fork PRs, then there's afaik no way to prevent running code without inspecting it first.
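As a rough illustration, here is a minimal, hypothetical Jenkinsfile of the kind the organization folder scan picks up; the stage names and commands are placeholders. Pull request branches run the build/test stage, while the deploy stage only runs once the change has been merged to master:

pipeline {
    agent any
    stages {
        stage('Build & Test') {
            steps {
                sh './gradlew clean build'
            }
        }
        stage('Deploy') {
            // Runs only on master, i.e. after the pull request has been merged
            when { branch 'master' }
            steps {
                sh './deploy.sh'
            }
        }
    }
}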

Run CI build on pull request merge in TeamCity

I have a CI build that is setup in TeamCity that will trigger when a pull request is made in BitBucket (git). It currently builds against the source branch of the pull request but it would be more meaningful if it could build the merged pull request.
My research has left me with the following possible solutions:
Script run as part of the build - I'd rather not do it this way if possible
Server/agent plugin - I haven't found enough documentation to figure out if this is possible
Has anyone done this before in TeamCity or have suggestions on how I can achieve it?
Update (based on John Hoerr's answer):
Alternate solution: forget about TeamCity doing the merge; use Bitbucket webhooks to create a merged branch the way GitHub does, and follow John Hoerr's answer.
Add a Branch Specification refs/pull-requests/*/merge to the project's VCS Root. This will cause TeamCity to monitor merged output of pull requests for the default branch.
It sounds to me like the functionality you're looking for is provided via the 'Remote Run' feature of TeamCity. This is basically a personal build with the merged sources and the target merge branch.
https://confluence.jetbrains.com/display/TCD8/Branch+Remote+Run+Trigger
"These branches are regular version control branches and TeamCity does not manage them (i.e. if you no longer need the branch you would need to delete the branch using regular version control means).
By default TeamCity triggers a personal build for the user detected in the last commit of the branch. You might also specify TeamCity user in the name of the branch. To do that use a placeholder TEAMCITY_USERNAME in the pattern and your TeamCity username in the name of the branch, for example pattern remote-run/TEAMCITY_USERNAME/* will match a branch remote-run/joe/my_feature and start a personal build for the TeamCity user joe (if such user exists)."
Then setup a custom "Pull Request Created" Webhook in Bitbucket.
https://confluence.atlassian.com/display/BITBUCKET/Tutorial%3A+Create+and+Trigger+a+Webhook
So for your particular use case with Bitbucket integration, you could use the webhook you create and have a shell/bash script (depending on your TeamCity server OS) run the remote-run git commands automatically, which will in turn trigger the TeamCity Remote Run CI build on your server. You'll then be able to go to the TeamCity UI, look at the +HEAD:remote-run/my_feature branch, view the Remote Run results on a per-feature basis, and be confident in the build results of the code you merge to your main line.
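As a hypothetical sketch of what that script might do (written in Groovy here for consistency with the rest of this page, though bash works just as well), assuming the remote-run/TEAMCITY_USERNAME/* trigger pattern from the quoted documentation; the branch and user names are placeholders:

def pushForRemoteRun(String sourceBranch, String teamcityUser) {
    // Push the pull request's source branch to a remote-run/<user>/<branch> ref so the
    // Branch Remote Run Trigger starts a personal build for that TeamCity user.
    def cmd = ['git', 'push', 'origin',
               "refs/heads/${sourceBranch}:refs/heads/remote-run/${teamcityUser}/${sourceBranch}"]
    def proc = cmd.execute()
    proc.waitFor()
    assert proc.exitValue() == 0 : "git push failed: ${proc.errorStream.text}"
}

// Called by whatever receives the Bitbucket "pull request created" webhook:
pushForRemoteRun('my_feature', 'joe')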
Seems that BitBucket/Stash creates branches for pull requests under:
refs/pull-requests/*/from
You should be able to set up a remote run for that location, either via the TeamCity run-from-branch feature or via an HTTP post-receive hook in Bitbucket/Stash.
You can also use this plugin : https://github.com/ArcBees/teamcity-plugins/wiki/Configuring-Bitbucket-Pull-Requests-Plugin
(Full disclosure : I'm the main contributor :P, and I use it every day)

How to stop TeamCity from building a pull request when it is viewed or commented?

Currently, my team is using TeamCity to automatically build pull requests from GitHub.
We have a configuration to build all the pull requests. In the version control settings of the config, our branch specification is
+:refs/pull/*/merge
In the "Build Triggers" configuration setting, we have only one trigger with the following trigger rule:
+:root=Pull Requests on our Repository:**/*
"Pull Requests on our Repository" is our VCS root name.
The issues:
When someone views a pull request on the GitHub website without doing anything else, a build is triggered on a TeamCity build agent. This is quite annoying, because from time to time we have multiple build agents building the same pull request (when multiple people view it).
When someone comments on a pull request, a build would also be triggered.
From my perspective, the only time I want TeamCity to start a build is when new commits are pushed to the pull requests.
Is there a way to do it?
GitHub's refs/pull/*/merge branches are updated every time the mergeability of the branch is recalculated, i.e. on every commit to the destination (most likely master) branch. They are also updated when a pull request is closed and then reopened. GitHub support says these branches are not intended for end-user use. The only workaround at the moment is to run builds on refs/pull/*/head branches automatically and on refs/pull/*/merge branches manually.
Do you have TeamCity configured as per this blog post? I then activate the TeamCity service hook in GitHub which takes care of triggering a build in TeamCity whenever there is a push. This seems to do the right thing for me. Or am I missing something?
I know this is old but I wanted to post what we've found as alternatives:
Stop using VCS roots altogether as a mechanism for triggering pull request builds. Instead, configure a GitHub webhook to notify a web app of yours whenever there is an update to a PR, and only then trigger a build via the TeamCity REST API (see the sketch after these alternatives).
In your build config, add a step that checks what changed in the PR. If nothing changed (i.e. no new commits were added), or if the PR is closed, cancel the build. The problem with this is that the build queue will still be populated with builds that are then cancelled. Also, you'd have to store the last built commit somewhere in order to do the check.
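A hypothetical Groovy sketch of the first alternative, queuing a build through TeamCity's REST API from the web app that receives the GitHub webhook; the server URL, build configuration ID, branch ref, and token handling are all placeholders:

def queueTeamCityBuild(String serverUrl, String buildTypeId, String branch, String token) {
    // POST an XML <build> element to TeamCity's build queue endpoint
    HttpURLConnection conn = new URL("${serverUrl}/app/rest/buildQueue").openConnection()
    conn.requestMethod = 'POST'
    conn.doOutput = true
    conn.setRequestProperty('Content-Type', 'application/xml')
    conn.setRequestProperty('Authorization', "Bearer ${token}")
    conn.outputStream.withWriter {
        it << """<build branchName="${branch}"><buildType id="${buildTypeId}"/></build>"""
    }
    assert conn.responseCode == 200 : "Queueing the build failed: HTTP ${conn.responseCode}"
}

// Only call this when the webhook payload shows new commits on the pull request:
queueTeamCityBuild('https://teamcity.example.com', 'MyProject_PullRequests',
                   'refs/pull/42/merge', System.getenv('TEAMCITY_TOKEN'))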
According to the TeamCity issue tracker, the issue of the TeamCity.GitHub plugin causing an infinite loop of builds was fixed in v9.0.