I would like to ask a question about Jenkins:
Is it possible to apply several pipelines to one GitHub repository?
If yes, can someone help me with a tutorial, documentation, or anything else?
I've tried to do this with the Blue Ocean plugin but it seems impossible ...
I would like to apply 6 pipelines to one GitHub repository, triggered by events, so this question is essential to me!
When you create a job in Jenkins you bind it to a repo.
If you create two jobs and bind them both to the same repo, they will both be triggered when you push to your repo. Then you attach your pipelines to these jobs.
You can create pipeline jobs pointing at different build scripts (i.e., filenames other than /Jenkinsfile) in the repo, or at different branches, just fine. I do it all the time.
However, be aware that any SCM polling you do on the repo/branch could prevent other jobs from identifying changes, depending on how you are triggering them.
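For illustration, a minimal sketch of one such pipeline script stored at a non-default path; each of the six jobs would point its "Script Path" setting at a different file in the same repo. The file and script names are just examples, and the push trigger assumes the GitHub plugin is installed:

```groovy
// ci/deploy.Jenkinsfile -- one of several pipeline scripts kept in one repo;
// a sibling job could point at ci/test.Jenkinsfile, and so on.
pipeline {
    agent any
    triggers {
        githubPush()   // start this job when GitHub delivers a push event
    }
    stages {
        stage('Deploy') {
            steps {
                sh './scripts/deploy.sh'   // hypothetical deployment script
            }
        }
    }
}
```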
In my code hosted on GitHub, we perform some tests and quite a bit of post-processing using GitHub Actions. Now, we would like to (or, actually, have to) use GitLab runners hosted by a supercomputing center to do some further testing and benchmarking. This cannot be done with self-hosted GitHub runners, because I cannot influence their decision. We do not want to move the whole workflow and community over to some GitLab instance either. So here's my (general) question: Is there a way to use GitLab runners from within GitHub Actions?
What I have tried, and what kind of works, is to mirror the repository over to the GitLab instance and let the runners do their magic there. Using this neat approach, the GitHub Action will wait for the results of the runners and integrate them into its own results. However, this does not work if contributors fork the repository and make pull requests.
In principle, it looks like this could be doable if the contributors also have accounts and corresponding permissions at the GitLab instance. This is fine for now, because the community is small and the GitLab instance is accessible to external contributors. Note that manual action from the maintainers of the code (i.e., me) is required before contributors can execute code with the runners for the first time, so we should be fine concerning security.
However, I cannot get this to work for pull requests, because I fail to mirror them. As I said, direct pushes are fine, but nothing else works. This leads me to the more specific questions: How can I mirror a pull request from GitHub to a GitLab repository? How can I enable this for both pull requests and pushes (and do I need even more cases)?
Any help is appreciated! I'm really no expert on GitHub Actions, GitLab runners, or even git itself (beyond the basics). If there's a better way to achieve this, I'm happy to hear about it!
I can think of several workarounds:
1. Change what triggers your pipelines
Since you cannot mirror pull requests, but you can mirror branches, adapt the pipeline triggers in GitLab so that pipelines are launched whenever there is a new commit, instead of a new PR.
You can always use a staging branch if you want to limit the pipeline executions, as in the sketch below.
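A minimal `.gitlab-ci.yml` sketch of this idea on the GitLab mirror, assuming a branch named `staging`; the branch and script names are placeholders:

```yaml
# Run the job only for commits on the mirrored "staging" branch,
# so pipelines are triggered by pushes rather than by PRs.
benchmark:
  script:
    - ./run_benchmarks.sh   # hypothetical benchmark script
  rules:
    - if: '$CI_COMMIT_BRANCH == "staging"'
```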
2. Use webhooks
If the GitLab instance is reachable over the internet, create a GitHub Action that triggers a GitLab pipeline execution whenever there is a PR on GitHub, or that even opens a merge request directly in GitLab. Both are well documented:
Trigger a pipeline using curl
API to create a merge request
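For concreteness, a hedged sketch of both calls; the instance URL, project ID, branch names, and tokens are placeholders (the trigger token is created in the GitLab project's CI/CD settings):

```sh
# Trigger a pipeline on the mirrored project for a given ref.
curl --request POST \
  --form "token=$TRIGGER_TOKEN" \
  --form "ref=main" \
  "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/trigger/pipeline"

# Open a merge request on the mirror (requires an access token).
curl --request POST \
  --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  --data "source_branch=pr-1234&target_branch=main&title=Mirrored PR" \
  "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/merge_requests"
```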
Does anybody know if it is possible to pass in a repo name / base the build on a dynamic repo name? This would allow us to share the same build definition across different branches, cutting down on definitions when creating a feature branch, etc.
When using a TFVC repo we would store the different releases in the same repo but under different paths. We could reuse the same build definition across different releases/FBs by altering the source path, such as $/product/$(release)/......
It appears Git likes to have the repo hard-coded into the build (hence the dropdown; there is no way to plug in a variable).
While the question is targeted at on-prem Azure DevOps, if this is possible in the hosted environment it would be helpful to know.
I recommend using YAML build templates. By default these check out "self" and are stored in the repo. That way they work on forks, branches, etc. Each branch can contain tweaks to the build process as well. A minimal example is sketched below.
With the 'old' UI-based builds this isn't possible.
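A minimal sketch of such a YAML build kept in the repository; the build script is illustrative:

```yaml
# azure-pipelines.yml -- stored in the repo itself, so every branch (and fork)
# that contains this file gets the same build, with room for per-branch tweaks.
steps:
  - checkout: self       # "self" = the repository/branch the run was queued for
  - script: ./build.sh   # hypothetical build script
    displayName: Build
```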
What you are looking for is actually two things:
templates - this allows you to reuse a definition across different pipelines
triggers - this allows you to trigger the pipeline when a commit happens on different branches
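A hedged sketch of how the two fit together; the file and branch names are examples. The shared steps live in a template:

```yaml
# build-template.yml -- reusable steps shared by several pipelines/branches
steps:
  - script: ./build.sh      # hypothetical build script
    displayName: Shared build
```

and each branch's entry-point pipeline triggers on commits and pulls the template in:

```yaml
# azure-pipelines.yml
trigger:
  branches:
    include:
      - main
      - 'feature/*'

steps:
  - template: build-template.yml
```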
Looks like Task Groups solved the need (mostly). I was hoping to have one build definition that could be shared across multiple branches; while this appears to be possible in the hosted model, on-prem is different.
I am able to clone a build (or use templates) to get an entry point into the repo/branch for fetching the sources, then pass the work off to a common task group. If I need to modify the build process for multiple branches, I just modify the task group.
I want to create an automated deployment pipeline for Azure Data Factory.
For one stream of development we can configure it using this doc:
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
But when it comes to deploying to two different test data factories for parallel feature development (in two different branches), it does not work, because the adf_publish branch that gets generated is specific to one data factory.
Currently we are doing deployment using PowerShell scripts and passing an object list of what needs to be deployed.
Our repo is in Azure DevOps.
I tried:
linking the repo to multiple data factories, but then it causes issues, perhaps when finding deltas to publish.
creating forks of the repo instead of branches so that adf_publish can be separate for every data factory - but this approach will not work when there is a conflict that needs a manual merge, so testing would be required again instead of moving to prod.
adf_publish gets generated whenever you publish. Publishing takes whatever you have in your repo and updates the data factory with it.
To develop multiple features in parallel, you just need to use "Save". Save commits your changes to the branch you are actually working on; other branches work the same way. Whenever you want to publish, first make a pull request from your branch to master, then publish. Any merge conflicts should be solved when merging everything into the master branch. Then just publish and there shouldn't be any conflicts, and adf_publish will get generated after that.
Hope this helped!
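Since the question already deploys with PowerShell scripts, here is a hedged sketch of deploying the ARM template that publishing writes to adf_publish into a test factory; the resource group, factory name, and file paths are placeholders:

```powershell
# Deploy the factory ARM template generated on the adf_publish branch.
# Assumes the Az PowerShell module and an authenticated session (Connect-AzAccount).
New-AzResourceGroupDeployment `
  -ResourceGroupName "rg-adf-test" `
  -TemplateFile "ARMTemplateForFactory.json" `
  -TemplateParameterFile "ARMTemplateParametersForFactory.json" `
  -factoryName "my-test-datafactory"
```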
A GitHub repository can be associated with only one data factory, and you are only allowed to publish to the Data Factory service from your collaboration branch. Check this.
It seems there is no direct and easy way to accomplish this. If you fork the repo as a workaround, you may have to solve the conflicts before merging, as @Martin suggested.
I'd like to make a new git branch, add a commit, and then push to GitHub. In addition, it would be great to create a PR for that branch straight from the Jenkins job.
Has anyone done this yet? The part I'm struggling with is how to create a PR. For creating a branch and a commit, I'm running regular git commands in the shell.
Thanks, N.
Sounds like you want the Pipeline Multibranch plugin; there's a blog here https://jenkins.io/blog/2015/12/03/pipeline-as-code-with-multibranch-workflows-in-jenkins/ that might help too. We use this plugin on the fabric8 project and it works great.
Correction: I misread the question initially. We use a shared pipeline library that contains reusable functions for making pull requests. This is an example where we make version-update PRs on downstream repos once a release has finished. The Groovy code that interacts with the GitHub API is here.
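Not the fabric8 library itself, but a minimal self-contained sketch of the same idea: push a branch from the job, then open the PR through the GitHub REST API. The credential ID, OWNER/REPO, and branch names are placeholders, and the push assumes the job's checkout is already authenticated:

```groovy
pipeline {
    agent any
    stages {
        stage('Branch, commit, PR') {
            steps {
                // A string credential holding a GitHub token with repo scope.
                withCredentials([string(credentialsId: 'github-token', variable: 'GITHUB_TOKEN')]) {
                    sh '''
                        git checkout -b automated-change
                        git commit --allow-empty -m "Automated change"
                        git push origin automated-change

                        # Open the pull request via the GitHub REST API.
                        curl -sS -X POST \
                          -H "Authorization: token $GITHUB_TOKEN" \
                          -H "Accept: application/vnd.github+json" \
                          -d '{"title": "Automated change", "head": "automated-change", "base": "master"}' \
                          https://api.github.com/repos/OWNER/REPO/pulls
                    '''
                }
            }
        }
    }
}
```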
I wanted to know how I can trigger a Jenkins job right after a pull request is merged into the master branch.
I'm very new to this Jenkins/GitHub thing and wanted to know how/if it's possible to achieve this without using webhooks.
Best Regards
Luca
You need to create a CI file; for Jenkins that is a Jenkinsfile. Basically, it's a file that tells Jenkins what to do, how to do it, and when to do it. You then create a build job that runs on the master branch and configure its build trigger; since you want to avoid webhooks, polling the repository for changes is the usual alternative. That's a tl;dr version that sketches how it's done in general; for specifics you have to check the manual.
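A minimal sketch of such a Jenkinsfile, assuming a pipeline job bound to the master branch; pollSCM makes Jenkins check the repo on a cron-style schedule instead of relying on a webhook (the schedule and build script are illustrative):

```groovy
pipeline {
    agent any
    triggers {
        // Check the bound repo/branch for new commits roughly every 5 minutes;
        // a PR merged into master shows up as a new commit and starts a build.
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                sh './build.sh'   // hypothetical build script
            }
        }
    }
}
```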