Azure DevOps pipeline: Can I turn all warnings into errors?

There are many reasons why Azure DevOps (ADS) pipelines may raise warnings yet continue execution (for example, due to bad task design). Because pipelines are often part of an automated workflow (e.g. nightly builds, pull request validation builds), the pipeline result page is rarely looked at by a human, so those warnings may go unnoticed or be ignored for a very long time.
To make sure that these warnings are taken care of, I would like to make them fail the pipeline, similar to the --fatal-warnings flag commonly used in compilers.
I tried searching the official documentation, Microsoft's issue tracker for pipeline tasks, and the internet for similar flags, but I could not find anything useful.
Do you have any idea how to make warnings in ADS pipelines harder to miss and ignore?
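I have not found a built-in equivalent of --fatal-warnings either. One workaround (a sketch, not an official feature) is to add a final script step that queries the build's own timeline through the Build REST API, which reports a warningCount per task record, and fails if anything warned. The endpoint and the predefined variables are documented; the overall pattern is my own workaround.

```python
# Final pipeline step: fail the build if any earlier step reported warnings.
# Requires mapping the OAuth token into the step's environment, e.g.
#   env: { SYSTEM_ACCESSTOKEN: $(System.AccessToken) }
import base64
import json
import os
import sys
import urllib.request

org_url = os.environ["SYSTEM_TEAMFOUNDATIONCOLLECTIONURI"]  # ends with '/'
project = os.environ["SYSTEM_TEAMPROJECT"]
build_id = os.environ["BUILD_BUILDID"]
token = os.environ["SYSTEM_ACCESSTOKEN"]

url = f"{org_url}{project}/_apis/build/builds/{build_id}/timeline?api-version=6.0"
auth = base64.b64encode(f":{token}".encode()).decode()
req = urllib.request.Request(url, headers={"Authorization": f"Basic {auth}"})
with urllib.request.urlopen(req) as resp:
    timeline = json.load(resp)

# Every timeline record (stage, job, task) carries a warningCount.
warnings = sum(r.get("warningCount") or 0 for r in timeline["records"])
if warnings:
    print(f"##vso[task.logissue type=error]{warnings} warning(s) in this build")
    sys.exit(1)
```

This doesn't stop execution at the moment a warning is raised, but it does turn a warning-laden run into a red build that someone has to look at.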

Related

Is there any tool that can analyze work items/code in Azure DevOps for security issues, leaks, or data-law violations?

We are using Azure DevOps for repos, pipelines, tests, and task management.
We have hundreds of projects, and we assume that there are a lot of ignored violations not only in the repo but also in work items.
Manually going through all the projects is not a fun job.
Maybe someone has experience with challenges like this.
We want to find a tool that can automatically scan our Azure DevOps work items, repos, etc., and send notifications if it finds violations.
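If no off-the-shelf tool fits, a homegrown scanner is feasible: the Work Item Tracking REST API exposes a WIQL query endpoint that can search work item text across a project. Below is a minimal sketch of that route; the organization/project names, the query terms, and the notification step are all placeholders to adapt.

```python
# Sketch: flag work items whose description contains suspicious terms,
# using the Work Item Tracking WIQL endpoint.
import base64
import json
import urllib.request

ORG = "https://dev.azure.com/myorg"  # placeholder organization URL
PROJECT = "MyProject"                # placeholder project name
PAT = "..."                          # PAT with work-item read scope

wiql = {
    "query": "SELECT [System.Id] FROM WorkItems "
             "WHERE [System.Description] CONTAINS 'password' "
             "OR [System.Description] CONTAINS 'secret'"
}
auth = base64.b64encode(f":{PAT}".encode()).decode()
req = urllib.request.Request(
    f"{ORG}/{PROJECT}/_apis/wit/wiql?api-version=6.0",
    data=json.dumps(wiql).encode(),
    headers={"Authorization": f"Basic {auth}",
             "Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    hits = json.load(resp)["workItems"]

for wi in hits:
    # Replace the print with your notification channel (mail, Teams, ...).
    print(f"Possible violation in work item {wi['id']}: {wi['url']}")
```

Looping the same script over the organization's project list would cover the "hundreds of projects" case without anyone clicking through them manually.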

How to implement custom flaky test detection in Azure DevOps Pipeline

A recent Azure DevOps article about flaky test management mentions custom detection of flaky tests, but I can't find any documentation about how to implement that.
I thought it might be part of the Publish Test Results task, but I see no mention of flaky tests there.
Documentation about how to implement custom flaky test detection is indeed missing.
I reported this issue to the Microsoft Developer Community; you can keep track of it in this thread.
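In the meantime, a homegrown heuristic is workable: rerun the suite (or just the failures) and treat any test with mixed outcomes across runs as flaky. Here is a minimal sketch, assuming JUnit-style XML result files; the directory layout is an assumption, and this is not the documented Azure DevOps mechanism:

```python
# Heuristic flaky-test detection: a test that both passed and failed
# across repeated runs of the same code is flagged as flaky.
import glob
import xml.etree.ElementTree as ET
from collections import defaultdict

outcomes = defaultdict(set)  # fully qualified test name -> outcomes seen

for path in glob.glob("results/run-*/TEST-*.xml"):  # assumed layout
    for case in ET.parse(path).getroot().iter("testcase"):
        name = f"{case.get('classname')}.{case.get('name')}"
        failed = (case.find("failure") is not None
                  or case.find("error") is not None)
        outcomes[name].add("fail" if failed else "pass")

for name in sorted(n for n, seen in outcomes.items() if len(seen) > 1):
    print(f"Flaky (mixed outcomes across runs): {name}")
```

The output could then feed whatever reporting you attach around the Publish Test Results step.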

Continuous integration pain points

Recently my fledgling team (just two devs) attempted to implement continuous delivery practices as described by Jez Humble.
That is, we ditched feature branches and pull requests (in git) and aimed to commit to the mainline branch at least daily.
We have a comprehensive unit and functional test suite for both the front and back end which is triggered automatically by Jenkins, when pushing to git.
We configured a feature switching app and resolved to use it for longer running features.
However, we encountered several problems and I'm curious to get a perspective from people who are successfully using this approach.
Delays due to vetting / manual QA process
Often tasks were small enough that we didn't think they warranted configuring feature switching, e.g. adding an extra field to a form, or changing some field labels. However, for various reasons the ticket would become blocked (e.g. some unforeseen aspect of the task needing UX input).
This would mean mainline ended up in a compromised state whilst we waited for external dependencies to unblock the task. Often we'd be saying "we can't deploy anything until Thursday, as that's when we can get an IA review"
The answer here is probably much tighter vetting of which tasks are started. However, it was often difficult to anticipate every potential blocker. Maybe if a task becomes blocked, additional dev work should be done to add a feature switch, or the commits should be reverted? Tricky situation.
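For reference, the feature-switch overhead we were trying to avoid can be quite small. A minimal toggle, sketched here with a config-file-driven flag store (names hypothetical), is enough to let a half-done form field ship dark:

```python
# Minimal feature-toggle sketch: flags live in a JSON file; unfinished
# work is committed to mainline but stays invisible until switched on.
import json

def flag_enabled(name: str, path: str = "flags.json") -> bool:
    """Return True if the named flag is on; default to off."""
    try:
        with open(path) as f:
            return bool(json.load(f).get(name, False))
    except FileNotFoundError:
        return False

def form_fields() -> list[str]:
    fields = ["name", "email"]
    if flag_enabled("extra_address_field"):  # blocked task ships dark
        fields.append("address")
    return fields
```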
Issues with code review during integration on mainline branch
Branches and pull requests give a nice breakdown of changes made on a single task. However, attempting CD we ended up with a mish-mash of unrelated commits on mainline, and the code reviewer had to somehow piece together the commits that related to the task he was reviewing. And often there'd be a number of additional minor commits at the end of a task: bug fixes, changes in response to review, and so on. Essentially, we couldn't figure out a clean way to review code with this setup.
Generic code review issues
We used Phabricator for a bit to do post-commit code reviews, but found that it flagged every single commit (some very minor) for review, rather than showing us a list of changes per individual dev task. That made reviewing code onerous compared to git pull requests. Is there a better way?
We've now reverted to short-lived feature branching in git, raising pull requests to initiate code review, and it's a nice setup; but if we could fix the issues we're having with non-feature-branching CD, we'd like to re-attempt that approach.
Could you automate the vetting process and/or run it before you integrate? If you automate the vetting (for example, for adding a form or a button), you just need a suite of tests to run post-integration to validate that your mainline is not broken.
You need to code review before integration, i.e. on the pull request. If issues are caught during a code review and fixed, the pull request is updated and the mainline is not polluted.
Code review tools are very specific to a group of developers and the team's needs. I suggest you play with a few code review tools to see which one suits yours.
Based on most of your questions, I would recommend running all your vetting/code review before you merge (you can do it in increments if the process is too cumbersome) and running an automated suite of tests for everything that you want to do post-integration.
If your team's process is too complicated to be finished in a day and can take multiple iterations, then it is worthwhile to evaluate a modified version of gitflow rather than a fork-based CI model.
If you use feature branches to work on tasks when you finish with a task you can either merge it back to the integration branch or create a pull request for the merge back to the integration branch.
In both cases you get a merge commit: a summary of every change you made on the feature branch.
Do you need something more than this?

Github pull request hooks, static code analysis and provisional rollback

Does GitHub provide
hooks to set up scripts to be run on every pull request (where, say, one could call a simple static code analyser script),
and a provision to reject the pull request based on the results of that script run via the pull request hook?
I am trying to set up a pre-screening mechanism to catch trivial bugs/mistakes, so that reviewers are not bothered by them and can focus on the logic/feature. If the pre-screening script finds that the source in question doesn't fit the norms (typically when even the simplest of checks fail; e.g. a function with >5000 SLoC, an unsafe strcpy(), or inclusion of deprecated header files), it should return a failure and the pull request itself should fail unless the minimum gating criteria are met.
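For concreteness, the gate I have in mind could be as small as the following sketch (the checks just mirror the examples above; a real setup would call proper static-analysis tools):

```python
# Trivial pre-screening gate: exit non-zero if any C file calls strcpy()
# or includes a header from a deny list, so CI can fail the pull request.
import pathlib
import re
import sys

DEPRECATED_HEADERS = {"sys/dir.h"}  # hypothetical deny list
problems = []

for path in pathlib.Path(".").rglob("*.c"):
    text = path.read_text(errors="replace")
    if re.search(r"\bstrcpy\s*\(", text):
        problems.append(f"{path}: unsafe strcpy()")
    for header in DEPRECATED_HEADERS:
        if re.search(rf'#\s*include\s*[<"]{re.escape(header)}[>"]', text):
            problems.append(f"{path}: deprecated header {header}")

print("\n".join(problems))
sys.exit(1 if problems else 0)
```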
Since the code is on GitHub rather than a local server, this seems kinda tricky.
I got a couple of pointers (here and here) but still couldn't gather the details fully. The codebase consists of multiple repositories on GitHub. Is there a better way to accomplish this? Please share your thoughts on possible approaches. Thanks!
Does GitHub provide hooks to set up scripts to be run on every pull request (where, say, one could call a simple static code analyser script), and a provision to reject the pull request based on the results of that script run via the pull request hook?
This should be achievable through the GitHub APIs, by creating a hook for events of type pull_request and then decorating the commits with a resulting status.
This is quite a low-level approach, but this lets you keep a complete control over what is being done. For instance, you could automatically add comments to the Pull Requests or even close them if they do not pass the analysis process.
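Here is a minimal sketch of that low-level approach, assuming a small Flask service registered as the webhook receiver; analyse() is a placeholder for your static analyser, while the pull_request event payload and the commit-status endpoint are the documented GitHub API:

```python
# Webhook receiver: on every pull_request event, run a (placeholder)
# analysis and report the verdict back as a commit status on the head SHA.
import os

import requests
from flask import Flask, request

app = Flask(__name__)
TOKEN = os.environ["GITHUB_TOKEN"]  # PAT with repo scope

def analyse(clone_url: str, sha: str) -> bool:
    """Placeholder: fetch the code at `sha` and run your checks."""
    return True

@app.route("/webhook", methods=["POST"])
def webhook():
    if request.headers.get("X-GitHub-Event") != "pull_request":
        return "", 204
    event = request.get_json()
    head = event["pull_request"]["head"]
    repo = event["repository"]["full_name"]
    ok = analyse(head["repo"]["clone_url"], head["sha"])
    requests.post(
        f"https://api.github.com/repos/{repo}/statuses/{head['sha']}",
        headers={"Authorization": f"token {TOKEN}"},
        json={"state": "success" if ok else "failure",
              "context": "pre-screener",
              "description": "static analysis gate"},
    )
    return "", 201

if __name__ == "__main__":
    app.run(port=8080)
```

With branch protection requiring the pre-screener context to pass, a red status effectively blocks the merge, which gives you the "reject the pull request" half.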
Another, higher-level approach would be to leverage the Travis CI service by adding a .travis.yml file to your repository. Travis is free for open-source projects and also offers paid services for private repositories.
Setting up Travis is quite easy and tweaking the build script is a breeze.
Below are two sample Travis setups for your inspiration:
LibGit2: a C library. Build with several compilers, run tests, run Valgrind. The build fails (and the PR is decorated as such) when the code doesn't compile or upon a test failure.
LibGit2Sharp: A C# binding for LibGit2. Build against Mono Xbuild compiler, run tests. The build fails (and the PR is decorated as such) when the code doesn't compile or upon a test failure.
The official announcement for the GitHub Commit Status services can be read in this blog post.
You may have use for this:
https://github.com/tomasbjerre/violation-comments-to-github-lib
It will parse the file system to find report files from static code analyzers, and then use those to comment on the pull request in GitHub.

What is a good tool for Build Pipelines?

I need a tool that will graphically represent our build pipeline. The screenshots below of ThoughtWorks Go and the Jenkins Pipeline plugin illustrate almost exactly what I want it to look like.
The problem is that we already use Jenkins for our builds and deployments, along with a few other custom tools for orchestration type duties. We don't want a pipeline tool to do the builds or deployments itself, it just needs to invoke Jenkins! I tried out Go, and the first thing it asked for is where my source code is and how to build it. I couldn't get Go to work in a way where Jenkins does the builds but Go creates the pipeline.
I've also experimented with the Jenkins Pipeline plugin, but it's very limiting. For one, it doesn't work with the Join plugin (so we can't have jobs run in parallel, which is a requirement). It also assumes that all of our tasks happen in Jenkins (Jenkins can't see outside of our test lab and into our production environment). I don't know if this is a viable option either.
So, does anyone have any recommendation for some pipeline tools that will do what I'm looking for?
Edit (03/2018)
Since writing this question in 2012 and answering it in 2014, numerous tools have come online to support what I originally wanted. Jenkins now supports scripted pipelines natively and has an excellent UI (Blue Ocean) for rendering them. Those stumbling on this question should consider using these for their pipeline needs.
https://jenkins.io/doc/book/pipeline/
https://jenkins.io/projects/blueocean/
End edit
(Old answer)
It didn't exist when I asked the question, but Jenkins' Build Flow Plugin does exactly what I needed, and creates pipeline views very well.
https://wiki.jenkins-ci.org/display/JENKINS/Build+Flow+Plugin
Jenkins/Hudson can certainly be used to achieve a real pipeline.
You could use Go if you used a dummy material (an empty git repo, for example), and then used the API to trigger a pipeline and upload artifacts.
But, that's quite some effort, and you should probably only do that if you have a very good reason otherwise to use Go.
You can try GoCD pipelines. GoCD has very nice features for continuous delivery, including a dashboard that shows real-time flow and status. Give it a try.