Run Jenkins job only once on changes pushed to GitHub

I followed this answer to set up a Jenkins job, and it's working fine.
I have scheduled the job to build on every push to the GitHub master branch with:
Poll SCM : * * * * *
But it continuously starts a build each minute.
How can I restrict this so that it runs only once per commit push?

There are several options. The two I've used with most success are:
Use a git commit hook to call the Jenkins REST API to start the job. The simple approach is to call the job's build API directly (something like http://jenkinsmachine:8080/job/your-jobs-name/build), but that hardcodes the job name and branch into the git hook script. A more flexible approach is to use the Git plugin's own REST mini-API, as described by Kohsuke in his blog. A sketch of such a hook is shown below.
Use something like the Throttle Concurrent Builds plugin or a creative use of slave nodes and executors to limit the number of concurrent builds of that job.
The first option is much preferred, but there are times when REST access to the Jenkins machine from the git machine is not available, so the second option can be used in those circumstances.
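A minimal sketch of such a hook, assuming a server-side post-receive hook, a job named your-jobs-name, and an API token for authentication (the URL, job name, and credentials are placeholders; older Jenkins versions may also require a CSRF crumb):

#!/bin/sh
# post-receive hook: trigger the Jenkins job once per push to master.
# JENKINS_URL, JOB_NAME, jenkins-user and API_TOKEN are placeholders for your setup.
JENKINS_URL="http://jenkinsmachine:8080"
JOB_NAME="your-jobs-name"

while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/master" ]; then
        curl -s -X POST "$JENKINS_URL/job/$JOB_NAME/build" \
             --user "jenkins-user:API_TOKEN"
    fi
done

# Alternative (the Git plugin's notifyCommit endpoint described in Kohsuke's post):
# this triggers polling on every job that watches the given repository URL, so
# nothing job-specific is hardcoded in the hook.
# curl -s "$JENKINS_URL/git/notifyCommit?url=git@github.com:your-org/your-repo.git"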

This approach is actually polling: Jenkins checks the GitHub repository every minute for changes.
If you want a true push from GitHub to Jenkins, you need to integrate GitHub webhooks with Jenkins. I wrote a blog post on this subject; scroll to section 2: "Jenkins - Github Integration".
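If you prefer to script the webhook setup rather than click through the GitHub UI, the webhook can also be created through the GitHub API. A minimal sketch, assuming the Jenkins GitHub plugin is installed and listening at /github-webhook/ (the repository, Jenkins URL, and token are placeholders):

# Create a push webhook on the repository that points at Jenkins.
# OWNER/REPO, the Jenkins URL and GITHUB_TOKEN are placeholders for your setup.
curl -s -X POST \
     -H "Authorization: token $GITHUB_TOKEN" \
     -H "Accept: application/vnd.github+json" \
     https://api.github.com/repos/OWNER/REPO/hooks \
     -d '{"name": "web", "active": true, "events": ["push"],
          "config": {"url": "http://jenkinsmachine:8080/github-webhook/", "content_type": "json"}}'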
If you are just playing around or using it for your personal open source project, you may want to look into Jenkins alternatives like Drone.io or Codeship.io. These services are generally free for open source and can configure GitHub webhooks in a few clicks. But they are not suitable for complicated enterprise builds.

Related

Using Gitlab runners from within GitHub Action, or: mirroring pull requests

In my code hosted on GitHub, we perform some tests and quite a bit of post-processing using GitHub Actions. Now, we would like to (or, actually, have to) use Gitlab runners hosted by a supercomputing center to do some further testing and benchmarking. This cannot be done with self-hosted GitHub runners, because I cannot influence the center's decision on that. We do not want to move the whole workflow and community over to some Gitlab instance either. So here's my (general) question: is there a way to use Gitlab runners from within GitHub Actions?
What I have tried and what kind of works is to mirror the repository over to the Gitlab instance and let the runners do their magic there. Using this neat approach, the GitHub Action will wait for the results of the runners and integrate them into its own results. However, this does not work if contributors fork the repository and make pull requests.
In principle, it looks like this could be doable if the contributors also have accounts and corresponding permissions at the Gitlab instance. This is fine for now, because the community is small and the Gitlab instance is accessible to external contributors. Note that manual action from the maintainers of the code (i.e., me) is required before contributors can execute code with the runners for the first time, so we should be fine concerning security.
However, I cannot get this to work for pull requests, because I fail to mirror them. As said, direct pushes are fine, but nothing else works. This leads me to the more specific questions: How can I mirror a pull request from GitHub to a Gitlab repository? How can I enable this for both pull requests and pushes (and do I need even more cases)?
Any help is appreciated! I'm really no expert on GitHub Actions, Gitlab runners or even git itself (beyond the basics). If there's a better way to achieve this, I'm happy to hear about it!
I can think of several workarounds:
1. Change what triggers your pipelines
Since you cannot mirror pull requests, but you can mirror branches, adapt the pipeline triggers in Gitlab so the pipelines are launched whenever there is a new commit, instead of a new PR; a sketch of mirroring a pull request's head as a regular branch is shown at the end of this answer.
You can always use a staging branch if you want to limit the pipeline executions.
2. Use webhooks
If the Gitlab instance is reachable from the internet, create a GitHub Action that triggers a Gitlab pipeline execution whenever there is a PR on GitHub, or even opens a merge request directly in Gitlab. Both are well documented (a sketch of the trigger call is also shown at the end of this answer):
Trigger a pipeline using curl
API to create merge request
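For option 1, a minimal sketch of mirroring a pull request's head to the Gitlab instance as a regular branch, assuming git remotes named github and gitlab and a pr-<number> branch naming scheme (all of these names are placeholders):

#!/usr/bin/env bash
# Mirror the head of a GitHub pull request to a branch on the Gitlab instance.
# The remotes "github"/"gitlab" and the pr-<number> naming are placeholders.
set -euo pipefail

PR_NUMBER="$1"

# GitHub exposes every pull request's head commit under refs/pull/<N>/head,
# so it can be fetched even when the pull request comes from a fork.
git fetch github "pull/${PR_NUMBER}/head:pr-${PR_NUMBER}"
# Push it as a regular branch so the Gitlab pipelines trigger on the new commits.
git push gitlab "pr-${PR_NUMBER}"

For option 2, a minimal sketch of triggering a Gitlab pipeline with curl, assuming a pipeline trigger token has been created for the project (the host, project ID, token, and ref are placeholders):

# Trigger a Gitlab pipeline from a GitHub Actions step (or any other script).
curl -s -X POST \
     -F "token=${TRIGGER_TOKEN}" \
     -F "ref=pr-42" \
     "https://gitlab.example.com/api/v4/projects/${PROJECT_ID}/trigger/pipeline"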

Why choose GitHub Actions when we can just run a bash script in a GitHub workflow?

I just completed a GitHub workflow that mostly uses actions, but also includes one bash script.
While writing the workflow, it seemed much quicker to use a bash script than actions (since some actions only do one thing). What are the reasons to prefer GitHub Actions over triggering a bash or Python script?
Or are we supposed to use scripting languages for the most part, and GitHub Actions for only a small portion of the whole workflow?
Interesting question, but not easy to answer without more information about what your goal is. The right answer might depend on your use case.
I have not used GitHub actions yet. Let me try to explain it anyway, starting pretty high level. Unfortunately, there's no option to add a table of contents ;) Please let me know if this helps.
1. What are GitHub Actions for?
From this "What is GitHub Actions? Benefits and examples" PDF file
GitHub Actions is a CI/CD tool for the GitHub flow. You can use it to integrate and deploy code changes to a third-party cloud application platform as well as test, track, and manage code changes. GitHub Actions also supports third-party CI/CD tools, the container platform Docker, and other automation platforms.
From docs.github.com
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository or deploy merged pull requests to production. [...]
GitHub Actions goes beyond just DevOps and lets you run workflows when other events happen in your repository.
2. Continuous Integration/Continuous Deployment (CI/CD)
Usually, people run CI/CD tools to build, deploy, test, and handle other tasks along the way. We use another third-party CI/CD pipeline based on Rake to build, test, and check links. Our pipeline invokes the kind of small scripts you mention.
3. GitHub actions and scripts
From Essential features of GitHub Actions
If your job generates files that you want to share with another job in the same workflow, or if you want to save the files for later reference, you can store them in GitHub as artifacts. Artifacts are the files created when you build and test your code. For example, artifacts might include binary or package files, test results, screenshots, or log files. Artifacts are associated with the workflow run where they were created and can be used by another job. All actions and workflows called within a run have write access to that run's artifacts.
Here's the key point, I guess. You can really do a lot of crazy stuff within a workflow, and all of it is related/specific to GitHub. Workflows are event-driven, meaning that you can run a series of commands after a specified event has occurred. For example, every time someone creates a pull request, you can automatically run a command that executes a test or other script.
4. GitHub action workflow and scripts
You can include different scripts in your workflow, e.g. using
Javascript: https://github.com/actions/github-script
Python: https://github.com/marketplace/actions/run-python-script
5. (Complex) Examples
You can check out the repository for docs.github.com for some more complex examples; see the action-scripts and workflow folders. GitHub themselves seem to use it pretty heavily.
6. Advantages/Disadvantages of GitHub actions
OR: Differences to other CI tools
It took some time to find something not marketing-ish. Key points are:
beginner-friendly using YAML config files
no need to set up your own CI pipeline
You can check out this SO post from 2019 for a list of what's good and bad about GitHub actions.
In short - for readability and the DRY ("Don't repeat yourself") principle.
It's more or less the same as using functions in programming.
I can agree that some trivial actions are useless.
But "actions/checkout" for example is priceless!

How to set up a github pull request build in a Jenkinsfile?

So, I've been using Jenkins for quite a while. I have set up numerous projects with the Github Pull Request Builder plugin to run tests whenever someone opens a pull request, and then trigger some other job (build, push, deploy, etc) whenever the pull request actually gets merged to master.
So, is there any way to set this up with a Jenkinsfile, or the organization folders, or the multibranch build deal?
The github-organization-folder plugin in combination with the multibranch plugin offers exactly this awesome feature: it scans a whole organization (optionally restricted to certain patterns in repo/branch names) for Jenkinsfiles and automatically adds jobs. This also happens for pull requests.
Once the PR is closed, it automatically removes the job.
To avoid arbitrary code execution, an organization member has to trigger building the job (same as for the GitHub Pull Request Builder plugin). The trigger phrase can be configured in the Jenkins system settings.
EDIT: Under the Advanced section in Jenkins, you find options about which types of PR you want to build. If you build fork PRs, then as far as I know there's no way to prevent running code without inspecting it first.

Run CI build on pull request merge in TeamCity

I have a CI build set up in TeamCity that triggers when a pull request is made in BitBucket (git). It currently builds against the source branch of the pull request, but it would be more meaningful if it could build the merged pull request.
My research has left me with the following possible solutions:
Script run as part of build - rather not do it this way if possible
Server/agent plugin - not found enough documentation to figure out if this is possible
Has anyone done this before in TeamCity or have suggestions on how I can achieve it?
Update (based on John Hoerr's answer):
Alternate solution: forget about TeamCity doing the merge; use BitBucket webhooks to create a merged branch the way GitHub does, and follow John Hoerr's answer.
Add a Branch Specification refs/pull-requests/*/merge to the project's VCS Root. This will cause TeamCity to monitor the merged output of pull requests against the default branch.
It sounds to me like the functionality you're looking for is provided via the 'Remote Run' feature of TeamCity. This is basically a personal build with the merged sources and the target merge branch.
https://confluence.jetbrains.com/display/TCD8/Branch+Remote+Run+Trigger
"These branches are regular version control branches and TeamCity does not manage them (i.e. if you no longer need the branch you would need to delete the branch using regular version control means).
By default TeamCity triggers a personal build for the user detected in the last commit of the branch. You might also specify TeamCity user in the name of the branch. To do that use a placeholder TEAMCITY_USERNAME in the pattern and your TeamCity username in the name of the branch, for example pattern remote-run/TEAMCITY_USERNAME/* will match a branch remote-run/joe/my_feature and start a personal build for the TeamCity user joe (if such user exists)."
Then setup a custom "Pull Request Created" Webhook in Bitbucket.
https://confluence.atlassian.com/display/BITBUCKET/Tutorial%3A+Create+and+Trigger+a+Webhook
So for your particular use case with BitBucket integration, you could use the webhook you create and have a shell/bash script (depending on your TeamCity server OS) that runs the remote-run git commands automatically, which in turn triggers the TeamCity Remote Run CI build on your server; a sketch of such a script is shown below. You'll then be able to go to the TeamCity UI, select the +HEAD:remote-run/my_feature branch, view the Remote Run results on a per-feature basis, and be confident in the build results of the code you merge to your main line of code.
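A minimal sketch of such a script, assuming the webhook handler passes it the pull request's source branch and the author's TeamCity username, and that the build configuration uses a remote-run branch pattern like remote-run/TEAMCITY_USERNAME/* (all names here are placeholders):

#!/usr/bin/env bash
# Called by the "Pull Request Created" webhook handler.
# $1 = source branch of the pull request, $2 = TeamCity username of the author.
set -euo pipefail

SOURCE_BRANCH="$1"
TEAMCITY_USER="$2"

# Push the PR's source branch to a remote-run branch; TeamCity's
# Branch Remote Run Trigger picks it up and starts a personal build.
git fetch origin "$SOURCE_BRANCH"
git push origin "origin/${SOURCE_BRANCH}:refs/heads/remote-run/${TEAMCITY_USER}/${SOURCE_BRANCH}"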
Seems that BitBucket/Stash creates branches for pull requests under:
refs/pull-requests/*/from
You should be able to set up a remote run for that location, either via the TeamCity run-from-branch feature, or via an HTTP post-receive hook in BitBucket/Stash.
You can also use this plugin : https://github.com/ArcBees/teamcity-plugins/wiki/Configuring-Bitbucket-Pull-Requests-Plugin
(Full disclosure: I'm the main contributor :P, and I use it every day)

How to stop TeamCity from building a pull request when it is viewed or commented?

Currently, my team is using TeamCity to automatically build pull requests from GitHub.
We have a configuration to build all the pull requests. In the version control settings of the config, our branch specification is
+:refs/pull/*/merge
In the "Build Triggers" configuration setting, we have only one trigger with the following trigger rule:
+:root=Pull Requests on our Repository:**/*
"Pull Requests on our Repository" is our VCS root name.
The issues:
When someone views a pull request on GitHub website without doing anything else, a build would be triggered in the TeamCity build agent. This is quite annoying, because from time to time, we have multiple build agents building the same pull requests (when multiple people view it).
When someone comments on a pull request, a build would also be triggered.
From my perspective, the only time I want TeamCity to start a build is when new commits are pushed to the pull request.
Is there a way to do it?
GitHub's refs/pull/*/merge branches are updated every time the mergeability of the branch is recalculated, i.e. on every commit to the destination (most likely master) branch. They are also updated when a pull request is closed and then reopened. GitHub's support says these branches are not intended for end-user use. The only workaround at the moment is to run builds on refs/pull/*/head branches automatically and on refs/pull/*/merge branches manually.
Do you have TeamCity configured as per this blog post? I then activate the TeamCity service hook in GitHub, which takes care of triggering a build in TeamCity whenever there is a push. This seems to do the right thing for me. Or am I missing something?
I know this is old but I wanted to post what we've found as alternatives:
Stop using VCS roots altogether as the mechanism for triggering pull request builds. Instead, configure a GitHub webhook to notify a web app of yours whenever there is an update to a PR, and only then trigger a build via the TeamCity REST API (a sketch of such a trigger call is shown below).
In your build config, add a step that checks what changed in the PR. If nothing changed (i.e. no new commits were added), or if the PR is closed, cancel the build. The problem with this is that the build queue will still be populated with builds that will then be cancelled. Also, you'd have to store the last built commit somewhere in order to do the check.
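A minimal sketch of the REST call such a web app could make, assuming token-based authentication and a build configuration ID of YourProject_PullRequests (the URL, token, build type ID, and branch name are placeholders; older TeamCity versions use basic auth instead of bearer tokens):

#!/usr/bin/env bash
# Queue a TeamCity build for a specific pull request branch via the REST API.
# TEAMCITY_TOKEN is expected in the environment; all other values are placeholders.
set -euo pipefail

TEAMCITY_URL="https://teamcity.example.com"
BUILD_TYPE_ID="YourProject_PullRequests"
BRANCH_NAME="$1"   # must match a logical branch name from your branch specification

curl -s -X POST "$TEAMCITY_URL/app/rest/buildQueue" \
     -H "Authorization: Bearer $TEAMCITY_TOKEN" \
     -H "Content-Type: application/xml" \
     -d "<build branchName=\"${BRANCH_NAME}\"><buildType id=\"${BUILD_TYPE_ID}\"/></build>"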
According to TeamCity's issue tracker, the issue of the TeamCity.GitHub plugin causing an infinite loop of builds was fixed in v9.0.