How to access Build properties on waterfall page inside buildbot stages? - buildbot

How can I access build properties like event.change.id, etc., that are displayed on the buildbot waterfall page, inside one of the buildbot stages (cbuildbot_stages.py)?
-Pratibha

Buildbot offers a mechanism to access those properties. It is described in http://docs.buildbot.net/current/manual/cfg-properties.html#using-properties-in-steps
You will need to use the Interpolate class to get the value you are looking for, for example: Interpolate('%(prop:event.change.id)s').
Please note the introduction section that describes possible mistakes people make when they start using this functionality.
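A minimal master.cfg fragment, assuming a property named event.change.id is set on the build (the exact property name depends on your change source and scheduler), might look like:

```python
# master.cfg fragment: render a build property into a shell command.
# "event.change.id" is the property name from the question; adjust it
# to whatever your scheduler/change source actually sets.
from buildbot.plugins import steps, util

echo_change = steps.ShellCommand(
    name="show-change-id",
    command=["echo", util.Interpolate("change id: %(prop:event.change.id)s")],
)
```

The property is only rendered when the step runs, which is the usual mistake the introduction section warns about: you cannot read the value at config-parse time.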

What and where is this file cbuildbot_stages.py ?
You could override the default buildbot behavior with what you want in your buildbot config file. Standard OOP practice.

Well, I am using buildbot to build Chromium OS. The cbuildbot_stages.py script is in the /chromite/buildbot/ directory.
I want to access the gerrit change id in buildbot stages.

Related

How do I change values in a file after publishing to GitHub?

I have a Python project that I am looking to publish on GitHub. This project has a couple of variables in one of the files that need to have their values obfuscated, i.e. API key, user/password, etc.
My test code has those variables filled with my own data, but I want to boilerplate them when I push changes, for obvious reasons.
Would I be on the right track looking at a GitHub action to accomplish this? If so, any pointers towards what kind of action I should be looking for that is most appropriate for this kind of task?
You should look into dotenv, which allows you to have a .env file or OS environment variables set to pull that private information into your code without having it set directly. One good tool for this is pydantic BaseSettings, which you can install via:
pip install pydantic[dotenv]
One nice thing I like about pydantic is that you can use either a .env file or environment variables; both will work.
If you have Continuous Integration (CI), you can add GitHub Secrets, which can be pulled in for your test runs with private API keys. You'll need to properly call them with GitHub contexts.
Don't put those values in your code. Read them from an environment variable or from a file instead. Then whoever wants to use your projects only needs to provide said env vars or said file. Check Keep specific git branch offline.
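A plain-stdlib sketch of this (the variable names here are hypothetical; pick ones matching your project):

```python
import os

# Read secrets from the environment instead of hard-coding them.
# "MY_APP_API_KEY" and "MY_APP_DB_PASSWORD" are hypothetical names.
api_key = os.environ.get("MY_APP_API_KEY", "")
db_password = os.environ.get("MY_APP_DB_PASSWORD", "")

if not api_key:
    print("MY_APP_API_KEY is not set; refusing to call the real API")
```

Whoever clones the repo then only needs to export those variables (or put them in a .env file that dotenv loads) before running the code.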
GitHub Actions seems overcomplicated, imo. Keep in mind that if you have already made a commit with those to-be-obfuscated variables, they will remain visible by going to those commits.

Custom Project Recognisers for Jenkins Multi-branch Pipelines

When you create a Github Organisation or a Bitbucket Team/Project, one of the configuration items is:
Project Recognizers: Pipeline Jenkinsfile
There are no other options other than "Pipeline Jenkinsfile", however the fact that the option is even there suggests that the developers envisage people writing their own custom 'recognizers' for projects that don't have a single 'Jenkinsfile' in the top directory of the repo.
Can anyone point me in the direction of any other project recognisers that can be installed and used, or even some details on where to start to implement my own recogniser?
My particular use-case is that within a single repository, we define several workflows that orchestrate actions over code / configuration in the one repo, and I would love to be able to use the Bitbucket Team option to dynamically scan the repo, find all the *.Jenkinsfile files across all branches / pull requests and populate the necessary pipelines.
For example, in the repo are the files:
/pipelines/workflow1.Jenkinsfile
/workflow2.Jenkinsfile
/workflow3.Jenkinsfile
I would like jenkins to create the folder structure:
/team/repo/workflow1/master
/dev
/PR1
/workflow2/master
/dev
/feature-xyz
Any thoughts on where I could start with creating a Project Recognizer to do this (if this is even possible)?
I think you can do that by providing several Project Recognizers with different names, for example:
Project Recognizers
=========================================
**Pipeline Jenkinsfile**
Script Path: pipeline/workflow1.Jenkinsfile (or the path to a file that contains valid Pipeline steps)
=========================================
**Pipeline Jenkinsfile**
Script Path: pipeline/workflow2.Jenkinsfile (or the path to a file that contains valid Pipeline steps)
=========================================
**Pipeline Jenkinsfile**
Script Path: pipeline/workflow3.Jenkinsfile (or the path to a file that contains valid Pipeline steps)
Another option here could be the Pipeline Shared Groovy Libraries Plugin; more details about this plugin can be found in Extending with Shared Libraries.
This approach gives you the ability to use your own custom scripts (classes, steps, etc.), which means you can define your own flow depending on the repo name, project name, etc.
As of now, there should at least be the option to provide an alternative recognizer for the Jenkinsfile. This was added in JENKINS-34561 - Allow to detect different Jenkinsfile filenames. You can see the pull request at jenkinsci/workflow-multibranch-plugin/pull/59, which may help provide some background information on how the recognizers work.
In terms of multiple being recognized from a single source, JENKINS-35415 - Multiple branch projects per repository with different recognizers and JENKINS-43749 - Support multiple Jenkinsfiles from the same repository are requests that are very similar to this one.
A comment from Stephen Connolly in JENKINS-43749 says this about it:
What this is asking for is, instead, to create a computed folder with a pipeline job for each jenkinsfile within the branch.
I think the APIs should support that if somebody wants to take a stab at it. The only issue I see is that we may need to tweak the branch-api to allow for the branch jobs to be a non-job type (i.e. computed folder)
It sounds like you will need to implement a BranchProjectFactory (example: WorkflowBranchProjectFactory) that is a factory for a ComputedFolder (example: WorkflowMultiBranchProject).
Good luck!

What are the status tags, like [build | passing]

This may be a well-known question, but I'm asking because I'm not familiar with these tags.
I've seen similar types of tags in various GitHub projects, especially in their README.md. I have several questions regarding these:
What's the purpose of these tags?
How to generate them?
Is there any good practice or documentation that suggests what types of tags can be used or should be used in a project?
When can a project be considered viable based on such a tag?
What's the purpose of these tags?
These images are provided by external services, often continuous integration services, and are used to show interesting information about the repository.
For example, the first badge you show in your example says that the build is "passing" (the exact definition of this will be build-specific, but it commonly means that the tests pass and nothing blew up during the most recent build).
The third example, coverage: 12%, is a code coverage report.
How to generate them?
Each service will have its own way.
The second badge in your example is from Scrutinizer, and unfortunately I can't find documentation about its badges. But most badging systems work by giving you a link for each project or job that you can use on your website or GitHub or whatever, and when a build happens the badge's appearance is updated accordingly.
The Travis CI documentation contains a good example.
Having answered the first two questions, I think your last two largely disappear. The badges that can be used are determined by whatever services you can find. The badges that should be used are entirely up to you.

How to link continuous integration to my latest sprint trunk?

Using continuous integration on my project, I need to check out the code from the latest sprint from Bazaar, as bzr://path/to/myproject/sprint/123.
As this path is changing repeatedly (for each sprint), I'm currently using externals to create a bzr://path/to/myproject/current pointing to bzr://path/to/myproject/sprint/123.
So, I just need to change the externals to redirect the continuous integration tool to the latest project.
Is there another way to do this?
What I don't want is to change the configuration of my project inside the continuous integration tool (CruiseControl.NET).
One option (might not be suitable for your teams' processes) would be to stop using a separate "sprint" location in bzr for each iteration's changes. Instead, just use a "trunk" (or perhaps your "current" above). If you are usually in a situation where you have multiple sprints having changes at the same time, then this would probably not be appropriate.
I suppose you can use a lightweight checkout.
bzr checkout --lightweight bzr://path/to/myproject/iterations/123 bzr://path/to/myproject/current
You can then use bzr switch to switch to the next branch (I'm not sure if it will work over the network):
bzr switch -d bzr://path/to/myproject/current bzr://path/to/myproject/iterations/124
After searching the web, I've found some articles about this question.
There are two solutions so far:
Automatically detect newly finished branches and build them. There is an example here using CC.NET. This is directly applicable to my iterations.
Another way is to provide developers with scripts that execute most of the CI tool's checks. This is not perfect, but it may catch issues before merging into the trunk.
Other references:
Best branching strategy when doing continuous integration?

Which version control programs can enforce running & passing of tests before integration of changes?

At my work we currently use Aegis version control/SCM. The way we have it configured, we have a bunch of tests, and it forces the following things to be true before a change can be integrated:
The full set of tests must have been run.
All tests must have passed.
With test-driven development (TDD) these seem like sensible requirements. But I haven't heard of any way you can do this with any other version control systems. (We're not currently planning to switch, but I would like to know how to do it in the future without using Aegis.)
I would be interested in any VCS (distributed or not) that can do this, I'm also interested in any plugins/extensions to existing VCS that allow this. Preferably open source software.
ETA: OK, it seems like the usual thing to do is have VCS + continuous integration software, and running the tests is automated as part of the build, instead of as a separate step. If I understand correctly, that still lets you commit code that doesn't pass the tests; you just get notified about it -- is that right? Is there anything that would stop you from being able to integrate/commit it at all?
IMO you're much better off using a continuous integration system like CruiseControl or Hudson if you want to enforce that your tests pass, and making the build rather than the check-in dependent on the test results. The tools are straightforward to set up, and you get the advantages of built-in notification of the results (via email, RSS or browser plugins) and test results reporting via a Web page.
Regarding the update to the question, you're right - VCS + CI lets you commit code that doesn't pass the tests; with most CI setups, you just won't get a final build of your product unless all the tests pass. If you really want to stop anyone from even committing unless all the tests pass you will have to use hooks in the VCS as others have suggested. However, this looks to me to be hard to deal with - either developers would have to run all of the tests every time they made a checkin, including tests that aren't relevant to the checkin they are making, or you'd have to make some very granular VCS hooks that only run the tests that are relevant to a given checkin. In my experience, it's much more efficient to rely on developers to run the relevant tests locally, and have the CI system pick up on the occasional mistakes.
With subversion and git you can add pre-commit hooks to do this.
It sounds like you need to look at Continuous Integration (or a variant of it).
I think Git has hooks for applying patches (git am) too.
Subversion and git both support this via pre-commit hooks.
Visual Studio Team System supports this natively via checkin policies.
I believe that Rational ClearCase also supports it, though I've never seen that demonstrated so I can't say for certain.
We use git and buildbot to do something similar, though not quite the same. We give each developer their own Git repository, and have the buildbot set to build whenever anyone pushes to one of those repositories. Then there is someone who acts as the integrator, who can check the buildbot status, review changes, and merge their changes or tell them to fix something as appropriate.
There are plenty of variations of this workflow that you could do with Git. If you didn't want to have someone be the integrator manually, you could probably set the buildbot up to run a script on success, which would automatically merge that person's change into the master repository (though it would have to deal with cases in which automatic merge didn't work, and it would have to test the merge results as well since even code that merges cleanly can sometimes introduce other problems).
I believe continuous integration software such as TeamCity allows you to do pre-commit build and test. I don't know of any VCS that provides it directly... there may be some, like the one you use, but I'm not familiar with them.
You can also use pre-commit hooks in Perforce. And, if you're a .NET shop, Visual Studio can be configured to require "gated" checkins.
VSTS with custom Work Items, right? I don't see anything wrong with using this. Built-in reporting. The choice to automate. Why not?
What I do here is follow a branch-per-task pattern, which lets you test the code already submitted to version control while still keeping the mainline pristine. More on this pattern here.
You can find more information about integration strategies here, and also comments from Mark Shuttleworth on version control here.
Most CI implementations have a mechanism to reject check-ins that don't meet all the criteria (most notably pass all the tests). They're called by different names.
A VCS should do what it does best: version source code.
TeamCity - Pre-tested commit
TFS - Gated check-ins