When you create a GitHub Organisation or a Bitbucket Team/Project, one of the configuration items is:
Project Recognizers: Pipeline Jenkinsfile
There are no options other than "Pipeline Jenkinsfile", but the fact that the option exists at all suggests that the developers envisage people writing their own custom 'recognizers' for projects that don't have a single 'Jenkinsfile' in the top directory of the repo.
Can anyone point me in the direction of any other project recognisers that can be installed and used, or even some details on where to start to implement my own recogniser?
My particular use case is that within a single repository we define several workflows that orchestrate actions over code / configuration in that one repo, and I would love to be able to use the Bitbucket Team option to dynamically scan the repo, find all the *.Jenkinsfile files across all branches / pull requests, and populate the necessary pipelines.
For example, in the repo are the files:
/pipelines/workflow1.Jenkinsfile
/workflow2.Jenkinsfile
/workflow3.Jenkinsfile
I would like Jenkins to create the folder structure:
/team/repo/workflow1/master
/team/repo/workflow1/dev
/team/repo/workflow1/PR1
/team/repo/workflow2/master
/team/repo/workflow2/dev
/team/repo/workflow2/feature-xyz
Any thoughts on where I could start with creating a Project Recognizer to do this (if this is even possible)?
I think you can do that by providing several Project Recognizers with different script paths, for example:
Project Recognizers
=========================================
**Pipeline Jenkinsfile**
Script Path: pipeline/workflow1.Jenkinsfile (or the path to a file that contains valid Pipeline steps)
=========================================
**Pipeline Jenkinsfile**
Script Path: pipeline/workflow2.Jenkinsfile (or the path to a file that contains valid Pipeline steps)
=========================================
**Pipeline Jenkinsfile**
Script Path: pipeline/workflow3.Jenkinsfile (or the path to a file that contains valid Pipeline steps)
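Each of those Script Paths just points at an ordinary Pipeline script. As a rough illustration (the stage name and echo step below are placeholders, not taken from your repo), one of the per-workflow files could be as small as:

```groovy
// pipelines/workflow1.Jenkinsfile -- contents are a placeholder sketch
pipeline {
    agent any
    stages {
        stage('workflow1') {
            steps {
                echo "Running workflow1 on branch ${env.BRANCH_NAME}"
            }
        }
    }
}
```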
Another option here could be the Pipeline Shared Groovy Libraries Plugin; more details about this plugin can be found under Extending with Shared Libraries.
This approach gives you the ability to use your own custom scripts (classes, steps, etc.), which means you can define your own flow depending on repo name, project name, etc.
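For instance, a custom step in the library's vars/ directory can hold the shared logic, and each Jenkinsfile only calls it. This is a hedged sketch: the library name my-shared-lib and the step name runWorkflow are invented for illustration.

```groovy
// vars/runWorkflow.groovy in the shared library (hypothetical step name)
def call(String workflowName) {
    echo "Running ${workflowName} for job ${env.JOB_NAME}"
    // put repo-/project-specific branching logic here
}
```

```groovy
// A Jenkinsfile that consumes the library (hypothetical library name)
@Library('my-shared-lib') _
runWorkflow('workflow2')
```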
As of now, there should at least be the option to provide an alternative recognizer for the Jenkinsfile. This was added in JENKINS-34561 - Allow to detect different Jenkinsfile filenames. You can see the pull request at jenkinsci/workflow-multibranch-plugin/pull/59, which may help provide some background information on how the recognizers work.
In terms of multiple being recognized from a single source, JENKINS-35415 - Multiple branch projects per repository with different recognizers and JENKINS-43749 - Support multiple Jenkinsfiles from the same repository are requests that are very similar to this one.
A comment from Stephen Connolly in JENKINS-43749 says this about it:
What this is asking for is, instead, to create a computed folder with a pipeline job for each jenkinsfile within the branch.
I think the APIs should support that if somebody wants to take a stab at it. The only issue I see is that we may need to tweak the branch-api to allow for the branch jobs to be a non-job type (i.e. computed folder)
It sounds like you will need to implement a BranchProjectFactory (example: WorkflowBranchProjectFactory) that is a factory for a ComputedFolder (example: WorkflowMultiBranchProject).
Good luck!
I have a repository on GitHub that contains a notebook I'd like to run automatically. I've looked at this action, which seems useful, but I'm not quite sure how my actions.yaml file should look, as I'm pretty new to GitHub Actions.
The Example 1 and Example 2 sections provided by this GitHub Action's author show what your GitHub Actions workflow file should look like.
Since you're the user of this GitHub Action, your repository will contain your workflow file under your .github/workflows directory. The workflow file can be named anything, as long as it sits in this location; i.e. it doesn't have to be named actions.yaml.
For another example, you can review the workflow file in my repository, again under .github/workflows. It makes use of another action (and it's currently all commented out, as I don't want to run it right now), but you will get the idea, and it can help you generalize and understand what goes into a workflow file.
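If it helps to see the general shape, here is a minimal hedged sketch of a workflow file. It is deliberately not tied to the specific action mentioned above; it just checks out the repo and executes a notebook with nbconvert, and the file name and notebook name are placeholders you would change.

```yaml
# .github/workflows/run-notebook.yml  (file name is arbitrary)
name: Run notebook
on:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install jupyter nbconvert
      - run: jupyter nbconvert --to notebook --execute my-notebook.ipynb
```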
Is it possible to add additional mapping to Get Sources at runtime?
Like in a prejobexecution task?
We are currently using a PowerShell script that determines which additional mappings to set up based on iteration, area and different business requirements, maps them to the current workspace and then runs tf get.
This works; however, the changesets and work items from the additional mappings are not linked to the run.
We have also tried a different approach, where a “starter”-pipeline runs the scripts and modifies another pipeline (updates the tfvcMapping) and then invokes it using a build completion trigger.
All changesets and work items are linked, however, the approach does not seem right.
Add additional mappings to Get Sources at runtime (Azure DevOps pipeline - TFVC)
I have encountered an issue very similar to yours before (I use Git). Personally, I prefer your second solution, which saves all the linked information (changesets and work items) at the cost of an additional pipeline.
With the first approach, just as you found in your testing, we lose some relevant information, which is not what we expect. Although we can use the checkout command to get the latest changesets, we cannot simply do the same for work items, because that linking is done by Azure DevOps. It is difficult for us to obtain the associated work items through changesets and associate them with our build.
The solution for me: create a pipeline (the "starter" pipeline, as you said) that invokes the REST API Definitions - Update to update the Get Sources settings of the other pipeline, then add a build completion trigger:
PUT https://dev.azure.com/{organization}/{project}/_apis/build/definitions/{definitionId}?api-version=5.1
Check the similar request body here.
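As a rough sketch of that call (hedged: the definition id, the mapping JSON, and the assumption that the TFVC workspace mapping lives in repository.properties.tfvcMapping are illustrative, not verified against your definition), the starter pipeline could run something like:

```powershell
# Fetch the target definition, rewrite its TFVC mapping, and PUT it back.
$org     = "https://dev.azure.com/{organization}"
$project = "{project}"
$defId   = 123   # id of the pipeline whose Get Sources should change (placeholder)
$headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
$uri     = "$org/$project/_apis/build/definitions/$($defId)?api-version=5.1"

$definition = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get

# Assumption: for TFVC definitions the workspace mapping is stored as a JSON string here.
$newMapping = '{"mappings":[{"serverPath":"$/Project/Main","mappingType":"map","localPath":"\\"}]}'
$definition.repository.properties.tfvcMapping = $newMapping

Invoke-RestMethod -Uri $uri -Headers $headers -Method Put `
    -ContentType "application/json" `
    -Body ($definition | ConvertTo-Json -Depth 20)
```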
Hope this helps.
New to VSTS, but not Git. Our small team has the usual mix of web apps, Windows apps and other misc applications/services, and we keep our database objects in Visual Studio SQL Server projects as well. So there are 15-20 or so different sets of code to deal with. Currently, each would have its own Git repo.
Was reading this post regarding single vs multiple "Team Projects".
I posted this earlier, but that was specific to backlog items; I suppose my real question is about the bigger picture regarding the idea of a "team project".
What would be a good structure for a small team with this number of applications, assuming each application is generally worked on independently, but you might want to build two or more of these applications together?
How about one team project with multiple "teams", one per "application"?
It's the terminology that is throwing me off.
Can different teams each use a different repo?
Can each team have a different set of build definitions? E.g. dev/prod etc.
A team project is a container for a portfolio of related applications. A team project can contain one or more source code repos (Git/TFVC), builds, releases, test cases, work items, etc. All of these entities have ways of defining security around who can view/modify them.
A team is just an organizational structure within a team project. You can use security permissions to limit certain repos, builds (or build folders), etc to a certain team.
The generally accepted guidance is to keep everything in a single team project. There are lots of things that don't cross team project boundaries, such as repos. Work-arounds usually exist, but they are typically awkward.
The one requirement you gave ([we] might want to build 2 or more of these applications together) is actually slightly tricky regardless of whether the repos are in one or in multiple team projects -- a build definition can be hooked up to a single repo. If you need to bring in additional repos, you'll need to use submodules or add an extra build step to clone the second repo. I can almost guarantee it'll be easier if everything is in the same team project, though.
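For example (a hedged sketch; the repository URLs and paths are placeholders), bringing a second repo into a build looks like one of these:

```sh
# Option 1: track the second repo as a submodule of the first
git submodule add https://dev.azure.com/{organization}/{project}/_git/OtherRepo external/OtherRepo

# Option 2: add a command-line build step that clones it next to the main sources
git clone https://dev.azure.com/{organization}/{project}/_git/OtherRepo ../OtherRepo
```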
The one-word answer to the two direct questions you asked is "Yes."
How you set up your structure really depends on your needs. There are many ways to organize it: a single repo, or multiple repos.
If you are using CI builds, keep in mind that the Get Sources task in your build will download your full repo. So with a single-repo strategy your builds could take longer to run. In this scenario you would also have more work to set up your builds and specify path filters so that only the correct build is triggered by your CI process (see the sketch below).
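As an illustration of that last point (a sketch assuming a YAML pipeline; classic build definitions expose the same idea as Path filters on the CI trigger, and the folder names here are placeholders), a trigger limited to one application's folder looks like:

```yaml
# azure-pipelines.yml for AppA only
trigger:
  branches:
    include:
      - main
  paths:
    include:
      - src/AppA/*
```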
Can different teams each use a different repo?
Yes they can.
You can create a security group for each team.
Then, on your team, you can remove it from Contributors and add your new group as part of Member of:
After that, in your version control settings, add your new security group and remove or deny access for the Contributors security group. This way, only your team's security group will have access to the repo.
This is optional. You only need to do it if you want to isolate access to your repos.
Can each team have a different set of build definitions? E.g. dev/prod etc.
You can setup a build for each of your repos.
If you need to isolate who has access, you can do it by changing the security on each build, removing Contributors and adding your security group.
I'm making a website and creating multiple design prototypes. I want to keep all of them for future reference.
Is this a suitable place to use branches, or should I just put them in folders?
How should I manage external dependencies (e.g. jQuery)? Should I include a minified version for every design, keep one for the whole project, or just link to an online version?
Branches are fine if:
you want to compare the differences of the same file across several variations of your website
you develop said variations in parallel, in isolation (see "When should you branch?")
That won't prevent you from deploying them to different directories (you simply check out each branch into a different folder on your web server).
Any common part (like a jQuery script) should live in a sub-directory that is versioned in its own repo and referenced by your main web repo as a submodule.
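As a hedged sketch of that combination (the URLs, branch names and paths are placeholders):

```sh
# Check each design branch out into its own folder on the server
git clone -b design-a https://example.com/website.git /var/www/design-a
git clone -b design-b https://example.com/website.git /var/www/design-b

# In the main website repo, track the shared jQuery code as a submodule
git submodule add https://example.com/common-js.git assets/common
```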
I just want to know if the search results in the Eclipse Search view can be shared with a fellow teammate as they are.
I perform a search, delete a few unwanted entries and then send the result to him/her.
The other person should be able to view it in exactly the same manner in the Search view.
Is there a way to do this?
This would be very helpful for me.
You should be looking at the Mylyn project (http://eclipse.org/mylyn).
This project allows you to create tasks and send them to co-workers through a task repository (such as bugzilla, jira, or most major issue trackers). Attached to these tasks are "contexts", which associate code elements (methods, fields, classes, etc) with the task.
Here is what you would need to do:
Install Mylyn (you and all co-workers).
Install the proper connector for your issue tracker (most major issue trackers have one). If you are not using an issue tracker, you can still import and export tasks as files, but it is not as easy to do, and I'd recommend using an issue tracker anyway.
Now add the task repository to your Eclipse installation. This is the way Mylyn speaks to your issue tracker. It allows you to create issues, bug reports, tasks, etc., from within Eclipse.
With this set up, you can now create a task associated with a task repository and activate it. You can add the desired program elements to your task by right clicking -> Mark as Landmark.
Once you have your task context complete, you can then attach the context to the remote repository (essentially attaching a zip file to the issue in your issue tracker). Other users can then retrieve the context and immediately start working with the context that you created.
It is really a great way to work when you need to share information about specific features in the code to other people on the project.