Have an optional status check enabled by default in Azure Pipelines - azure-devops

We use Azure Pipelines and our branch policy is set up to check our SonarCloud Quality Gate via a status check (https://learn.microsoft.com/en-us/azure/devops/repos/git/pr-status-policy?view=azure-devops).
This works fine so far, as we can see the results as part of the PR:
We want to continuously improve our code; however, as we're working with an old code base, there are sometimes changes that violate the Quality Gate. As an example, we'd like a certain percentage of code coverage via tests, but for some changes this might not be possible because the change is in an older part of the application (please note that I'm aware this might not be a good reason to skip the check, but that's not the point of my question ;-)).
Due to this, we can't make the status check "required", as there are situations where we want to override it. I was wondering if we can make the status check optional, but "enabled" by default. What I mean is that it appears as a failed check and blocks completion by default, but allows me to make it optional in some cases.
Right now I can do this by setting the status check as optional, and then I have to manually make it "required":
Then it will be shown like this in the PR:
So essentially I'm wondering if I can somehow have the manual step from "Optional" to "Required" done automatically, without losing the choice to make it optional?
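For reference, the manual "Optional"/"Required" toggle in the branch policy UI appears to correspond to the isBlocking flag on the status-check policy configuration, so the flip itself could in principle be scripted against the Policy Configurations REST API. A rough sketch in Python; the organization, project, policy id and API version below are placeholders, not a tested solution:

```python
import base64
import requests

# Placeholders: adjust the organization, project and personal access token.
ORG = "my-org"
PROJECT = "my-project"
PAT = "<personal-access-token>"

BASE = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/policy/configurations"
HEADERS = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}

def set_status_check_blocking(policy_id: int, blocking: bool) -> None:
    """Flip an existing status-check branch policy between Required (blocking) and Optional."""
    url = f"{BASE}/{policy_id}?api-version=7.1"
    config = requests.get(url, headers=HEADERS).json()   # read the current configuration
    config["isBlocking"] = blocking                       # True = Required, False = Optional
    requests.put(url, headers=HEADERS, json=config).raise_for_status()

# Example: make the quality gate check required again after it was temporarily set to optional.
# set_status_check_blocking(policy_id=42, blocking=True)
```

Note that such a flip applies to the branch policy as a whole, not to an individual PR, so it would only automate the manual step described above.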
Does anyone have an idea if (and how) this would be possible?

Related

Hide GitHub Action when completed on Pull Request

I have a GitHub Action that evaluates something and then creates a status check with the result.
setup is the job that evaluates and linting is the result (a different status check with all the information). This is done because the default status check only shows the logs, but if I create a second one, I can format it with Markdown.
The problem is that when the action runs again for the same commit (because the Pull Request was modified by changing the title, reviewers, etc.), which is intended, it creates a second setup check that doesn't disappear. These accumulate with every modification I make.
The old linting status check, on the other hand, is replaced by the new one, so I have no problem with that.
Is there any way to hide the setup check once it's completed? Or to completely hide it? I would prefer to show it while it's running, but once it's finished it should hide itself to keep the PR clean.
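For context on the mechanics described above: a named check run is typically created through the Checks API, and posting a new run under the same name is what makes the latest result supersede the older one in the PR view (which is why linting gets replaced). A hedged sketch using requests; the owner, repo and token are placeholders, and inside GitHub Actions the workflow's GITHUB_TOKEN would usually be used:

```python
import requests

# Placeholders: repository coordinates and a token permitted to create check runs.
OWNER, REPO = "my-org", "my-repo"
TOKEN = "<token>"
HEADERS = {
    "Authorization": f"Bearer {TOKEN}",
    "Accept": "application/vnd.github+json",
}

def post_lint_check(sha: str, passed: bool, markdown_summary: str) -> None:
    """Create the 'linting' check run for a commit; re-posting under the same
    name makes the newest run the one shown on the pull request."""
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/check-runs"
    payload = {
        "name": "linting",
        "head_sha": sha,
        "status": "completed",
        "conclusion": "success" if passed else "failure",
        "output": {"title": "Lint results", "summary": markdown_summary},
    }
    requests.post(url, headers=HEADERS, json=payload).raise_for_status()
```

The duplicated setup entry, by contrast, is the workflow job itself rather than a check run created this way, which is presumably why it cannot simply be overwritten by a same-name post.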
This appears to be a bug. Your best plan of action is to contact support@github.com regarding this bug. Make sure you mention how to reproduce it (or even link them to this SO post).
As this doesn't appear to have any adverse effects (other than potentially cluttering the screen), it shouldn't be that big of a problem, but I'd definitely reach out to GitHub.

Spacing characters during Pull Request code review

So I am using GitHub pull requests for my code review needs, and my only issue is that I cannot tell whether a person is using tabs or spaces for indentation. We have a standard here on this, and you can fail code review for using the wrong one. Is there a way to tell which they are using with GitHub, or will I have to manually open up the file in my editor to tell the difference?
Is there a way to tell which they are using with GitHub or will I have to manually open up the file in my editor to tell the difference?
Ideally – neither!
Whenever things can be checked in an automated way, let the computer do the work for you. Proper usage of whitespace, among many other static rules, can be checked with a variety of tools, often called linters. Which one applies highly depends on what language your project uses. Of course, you can also write your own scripts if you so choose.
What you can do on GitHub is connect your repository to a CI tool such as Travis. This lets you automatically build all pull requests and check things such as whitespace rules. It also lets you run test suites, code formatting, and anything else you can automate; you can (and should!) run it from there to minimize manual work.
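As an illustration of the "write your own script" route, a minimal check that a CI job could run over the changed files, failing the build whenever a line is indented with a tab (assuming spaces are the agreed standard), might look like this:

```python
import sys
from pathlib import Path

def find_tab_indentation(paths: list[str]) -> list[str]:
    """Return 'file:line' locations where a line starts with a tab character."""
    offenders = []
    for path in paths:
        for lineno, line in enumerate(Path(path).read_text().splitlines(), start=1):
            if line.startswith("\t"):
                offenders.append(f"{path}:{lineno}")
    return offenders

if __name__ == "__main__":
    hits = find_tab_indentation(sys.argv[1:])
    for hit in hits:
        print(f"tab indentation found at {hit}")
    sys.exit(1 if hits else 0)   # a non-zero exit code fails the CI check
```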

Move a JIRA issue between workflows

I have an issue in JIRA that follows one workflow; the workflow looks something like this (for bugs):
New -> Eval -> Approve -> Roadmap/Schedule -> Dev -> Complete
This workflow is for issue type "Bug".
For higher priority bugs, I want a totally different workflow, and for it to have its own issue type, for instance "Priority Bug".
R&D -> Dev -> Release -> Complete
This works great for new Priority Bugs, but I have a transition that allows you to promote a normal bug to a priority bug. That transition changes the issue type to Priority Bug properly, but once it gets there it seems lost; it's now not in either workflow. How do I get it to change workflows as well?
I suggest installing the Script Runner plugin and implementing this as a single workflow. You can have custom scripted conditions that check the issue type and allow or disallow your transitions, thus emulating a different workflow for another issue type.
Workflows are for a given project and for a given issue type. If the bug is in a particular project and of a particular issue type then it will follow the corresponding workflow.
However, when moving from one workflow to another, there needs to be some way for JIRA to know what state to go to. When you do a manual move, you get a prompt that allows you to decide which state it has in the new workflow. I have never done an automatic move triggered by a transition, but I suspect it will have a problem determining what state the issue should have. Perhaps you could set the state as part of the transition?

What are the best practices for defining workflow for documentation and localization tasks in JIRA?

Almost every issue needs a documentation subtask.
Many issues need a localization subtask.
Should documentation and localization issues have their own workflow (by issue type)?
Should each project have documentation and localization components, so that those issues would be automatically assigned to the component owner?
Should "Create Issue" screen have a checkbox "Needs documentation", which would create a documentation subtask with specific fields?
I agree: only create issues or sub-issues for work that occurs in parallel and needs a different assignee or due date or such. If the work happens in every case, then build it into the workflow instead.
The other thing to be aware of is that whatever is entered in a field such as "Notes for Documentation" always needs human review before being sent to a customer.
I have a different understanding of working with JIRA. For me, JIRA helps me understand when I have to do something, what its status is, and when I'm done with it. When documentation and localization are necessary every single time, I would not want to flood my JIRA with all these sub-tasks when everyone already understands that they are necessary.
The only reasons why these should be extracted as sub-tasks (and managed with a workflow) are:
You normally have a different assignee for documentation and/or localization, so you are able to assign that work to a different person.
Their workflow is different, so you want to manage it separately.
I would not add extra components for them; if you want to isolate them, just use a custom field (or create a sub-task-like work item type for documentation and localization, though I don't know if that is easily possible).
I would not add that "Needs documentation" flag, because people who create issues often don't know whether documentation is necessary. So no, I would not add them as sub-tasks, nor as an additional flag, but instead explain to everyone that documentation and localization are necessary. There should be ways to check that automatically, without any extra issue...
And of course, you are free to add a documentation screen with additional custom fields. There, while implementing the issue, you could record what you have done about documentation and internationalization. Checking whether these fields are empty would then replace the flag you mentioned.

TFS 2008: Questions about auto Builds, Labels and general versioning

a bit of background first...
I am setting up a versioning numbering system for our project which currently only has a development branch, but we are now moving towards our first deployment. We are using TFS and we use nightly builds on our dev branch.
The way we are probably going to go with this is that when we get ready for a release, we take a branch off dev and call it 1.x. This shall be a test branch: we test it, fix it (then merge back to dev), test it some more, and when it is all good we take another branch off the 1.x branch and call it 1.0. It is this branch that gets deployed to production. Any fixes for production will be made on 1.x, tested, and then a new branch 1.1 will be made.
My issue is with the testing of the 1.x branch. Before testing the branch will be locked for obvious reasons. My issue is that QA requires that a round of testing be conducted against a "version number" and if testing fails the next round of testing will be against a new "version number." Us developers want to tie the "version number" to the release and testing can iterate over that version...so there is a conflict here.
My first thought is to use the build number as the point in time that the code is tested against. When it's time to submit a new version for testing, the 1.x branch is locked again and a build is kicked off, and the VSTS number that is generated becomes "release candidate 1 for v1.0." Mapping the RC to a build we can do manually in a spreadsheet...
...then someone mentions labels, and that the code should be locked, labelled and built prior to testing. I have never used labelling before and have just read that the build itself creates a label in TFS.
I am now confused about the best way to go here. Is using a build number for a release candidate enough? Does manual labelling serve any purpose here (the only benefit I can see is that we can give it our own name and description)? Can I tell TFS NOT to create a label whenever it runs a build and just do all our own labelling at significant points in time (not every build is going to be a release candidate, for example)? If so, is NOT creating a label after each build a bad idea? What does labelling give me?
I guess I am confused about where changesets, build numbers/names and labels all fit in with one another...
This is a broad question, but it's one of those ones where I am not 100% sure what to ask. Any help appreciated.
...then someone mentions labels, and that the code should be locked, labelled and built prior to testing. I have never used labelling before and have just read that the build itself creates a label in TFS.
What you have read is correct. With TFS (unlike, say, SourceSafe), every server action constitutes a 'known point in time' which can later be referred to. You can see what I mean by doing a Get Specific Version... and looking in the Version dropdown: in TFS 2005 the relevant ones I see are Changeset, Date, Label. Now, as you correctly say, every build automatically creates a label. This means that at any future time it will be possible to retrieve the code exactly as it was after any given changeset; at any given date; and when any specific label was applied, thus including when any build was done.
The upshot is that you can use your own labels or not, entirely at your own discretion - the ability to retrieve a given snapshot of the code will be there anyway. I wouldn't suggest trying to inhibit TFS from producing a label for each build (I don't know if this is even possible) - labels cost you nothing.
Your branch 1.x is a consolidation branch which will contain many incremental small evolutions.
Locking the branch is not the answer.
Setting a label (specifically named "for QA test" and locked so that the label cannot be moved) is the usual way to signal to the QA team that they can build their own workspace and retrieve that exact label.
Then they can begin their tests against the code.
Creating a label after each build is not always practical, since not every build is meant to be tested by QA.