Release pipeline - manual status override - azure-devops

I've got an Azure DevOps release pipeline for our web apps with two stages:
staging (production slot) and production.
Both stages run integration tests against their environment as the final step.
Sometimes, due to the nature of the live environment (data), some of the tests may fail. That doesn't mean there's an issue with the app.
When that happens, the release to that environment is marked as Failed (rejected), which is correct from a process point of view, but manual inspection can reveal that it was only a data issue, so it's fine to keep the deployment.
Is there any way to manually change the status (via the GUI or the API) so it no longer appears as Failed on the dashboard?
I can't find anything in the GUI. I was able to find Manual Interventions in the API (https://learn.microsoft.com/en-us/rest/api/azure/devops/release/manual%20interventions?view=azure-devops-rest-5.1), but there are no details on what it actually does.
Redeployment is not guaranteed to succeed from a data perspective, hence the need to manually override the status.
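For what it's worth, here is a minimal sketch (the org, project, release id, and PAT are placeholders, not real values) of calling that Manual Interventions List endpoint to inspect what it returns. As far as I can tell, this API resumes or rejects a paused Manual Intervention task in an in-flight deployment rather than overriding a finished stage's status:

```python
# Hedged sketch: list manual interventions for a release via the REST
# endpoint from the linked docs (api-version 5.1). All names are placeholders.
import base64
import json
import urllib.request

def manual_interventions_url(org: str, project: str, release_id: int) -> str:
    # URL shape taken from the REST reference linked above.
    return (f"https://vsrm.dev.azure.com/{org}/{project}/_apis/release/"
            f"releases/{release_id}/manualinterventions?api-version=5.1")

def pat_auth_header(pat: str) -> dict:
    # A personal access token is sent as basic auth with an empty user name.
    token = base64.b64encode(f":{pat}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def list_manual_interventions(org: str, project: str,
                              release_id: int, pat: str) -> list:
    req = urllib.request.Request(
        manual_interventions_url(org, project, release_id),
        headers=pat_auth_header(pat))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]
```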
Edit:
This is what I'm currently getting when it fails; I'd like to be able to turn the red stage green.

You can try using the option "Trigger even when the selected stages partially succeed", which is available in the pre-deployment conditions.
Then I saw the option to manually deploy the prod environment.
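That manual (re)deploy can, as far as I know, also be driven from the REST API by PATCHing the stage's status to inProgress via the Update Release Environment call. A hedged sketch, with all ids and the PAT as placeholders:

```python
# Hedged sketch: re-trigger a release stage via Update Release Environment
# (api-version 5.1). Org, project, ids, and PAT below are placeholders.
import base64
import json
import urllib.request

def redeploy_request(org: str, project: str, release_id: int,
                     environment_id: int, pat: str) -> urllib.request.Request:
    # PATCHing the stage status to "inProgress" starts (or re-runs)
    # that stage of an existing release.
    url = (f"https://vsrm.dev.azure.com/{org}/{project}/_apis/Release/"
           f"releases/{release_id}/environments/{environment_id}"
           "?api-version=5.1")
    token = base64.b64encode(f":{pat}".encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps({"status": "inProgress"}).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
        method="PATCH")
```

Note that this re-runs the stage rather than flipping a failed status to succeeded, so the data-dependent tests would still have to pass on the re-run.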

Related

Triggers for build completion not kicking off pipeline

Hoping someone could help me out with an issue I'm running into. I have 4 different pipelines set up, with the first triggering the second upon build completion, and so on down the line. The triggers are not kicking off after the previous pipeline's build completes, as they are supposed to. They're also all on the same branch, so I'm at a loss as to what to do. Any ideas? These are classic pipelines, not YAML.
First, you need to make sure that your MPV Automated Testing Step 1 pipeline runs successfully, because a failed run will not fire the build completion trigger.
I tested two pipelines on the same branch. On my side, build completion trigger works well.
In addition, there was a recent availability degradation event in Azure DevOps which could have affected these services; it has since been resolved. If you want to know more information, please click here. You can try again to see if the problem still exists.

DevOps Provision Azure Resources step is clearing my bot code if release aborted

I have a pipeline built where, simplified, it deploys a chatbot to a QA Bot Service and then, with a pre-deployment approval, to a PROD Bot Service. Normally it works fine. However, if I do not approve the release, my PROD Bot Service gets wiped (the project files are gone). I tried moving the approval to a post-deployment approval on the QA Bot Deployment, and I have the same exact issue. So my question is, why are my PROD Bot Service files being affected if that step is never being run? I need to be able to cancel the release without impacting the existing production code!
Edit: Updated with additional context. I have determined the issue is happening at the Provision Azure Resources step. Somehow that is causing the code to clear out before I get to ANY bot service deployment steps. Updated the title as well to match the issue.
I figured it out. When you set the App Service configuration settings, everything is cleared out except the settings you put in (annoying, but that's a whole other issue...). My issue was that I didn't have the WEBSITE_RUN_FROM_PACKAGE = 1 setting in the ARM template. Apparently, if you do not set this value, your bot code (if previously deployed from a package via DevOps) will be cleared out. I never noticed this because if you finish the release, this value is set by your Bot Service Deployment action, and we hadn't cancelled any releases (or had immediately redeployed).
In short, WEBSITE_RUN_FROM_PACKAGE = 1 is required in the ARM template if you are setting any other app settings. Otherwise, bot code previously deployed by a DevOps package will be cleared as soon as the provisioning step completes. Adding this value fixed the issue.
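For illustration, a trimmed sketch of where that setting could sit in the ARM template's site resource. The apiVersion, parameter names, and the second app setting are placeholders of mine, not taken from the original template:

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2021-02-01",
  "name": "[parameters('botSiteName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "siteConfig": {
      "appSettings": [
        { "name": "WEBSITE_RUN_FROM_PACKAGE", "value": "1" },
        { "name": "MicrosoftAppId", "value": "[parameters('appId')]" }
      ]
    }
  }
}
```

Because the appSettings array replaces the whole set of settings, anything not listed here (including WEBSITE_RUN_FROM_PACKAGE) is removed when the template deploys, which matches the wiped-code symptom above.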

Azure DevOps Release Pipelines: Letting release flow through multiple environments with manual triggers

I'm trying to configure Azure DevOps release pipelines for our projects. I have a pretty clear picture of what I want to achieve, but I can only get almost all the way there.
Here's what I'd like:
The build pipeline for each respective project outputs, as artifacts, all the things needed to deploy that version into any environment.
The release pipeline automatically deploys to the first environment ("dev" in our case) on each successful build, including PR builds.
For each successive environment, the release must have been deployed successfully to all previous environments. In other words, in order to deploy to the second environment ("st") it must have been deployed to the first one ("dev"), and in order to deploy to the third ("at") it must have been successfully deployed to all previous (both "dev" and "st"), etc.
Each environment can have specific requirements on which branches deployable artifacts must have been built from; e.g. only artifacts built from master can be deployed to "at" and "prod".
Each successive deploy to any environment after the first one is triggered manually, by someone on a list of approvers. The list of approvers differs between environments.
The only way I've found to sort of get all of the above working at the same time is to automatically trigger the next environment after a successful deployment, and add a pre-deployment gate with a manual approval step. This works, except the manual approval doesn't trigger the deployment per se, but rather lets an already-triggered deployment start executing. This means that any release that's not approved for promotion to the next environment is left hanging until manually dismissed.
I can avoid that by having a manual trigger instead of automatic, but then I can't enforce the flow from one environment to the next (it's e.g. possible to deploy to "prod" without waiting for successful deployments to the previous stages).
Is there any way to configure Azure DevOps Release Pipelines to do all of the things I've outlined above at once?
I think you are correct; you can only achieve that by setting automatic releases after a successful release, with approval gates. I don't see any other options with current Azure DevOps capabilities.
Manual triggers with approval gates don't check that previous environments were successfully deployed to, unfortunately.
I hope this provides some clarity after the fact. Have you looked at YAML pipelines? In those you can specify conditions on each stage.
The stages can then have approvals on them as well.
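A hedged sketch of what that multi-stage YAML could look like (the "at" environment is omitted for brevity, and stage/environment names are placeholders). Approvals and branch-control checks are attached to the dev/st/prod environments in the Azure DevOps UI, not in the YAML itself, so an approver has to sign off before a stage targeting that environment runs:

```yaml
trigger:
  branches:
    include: [master]

stages:
- stage: Dev
  jobs:
  - deployment: DeployDev
    environment: dev              # first environment, deploys on every build
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to dev"

- stage: St
  dependsOn: Dev                  # enforces the dev -> st ordering
  jobs:
  - deployment: DeploySt
    environment: st               # approval check lives on this environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to st"

- stage: Prod
  dependsOn: St                   # enforces the st -> prod ordering
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  jobs:
  - deployment: DeployProd
    environment: prod
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy to prod"
```

Because an environment approval pauses the stage rather than leaving the release rejected, this avoids the "left hanging until manually dismissed" problem described above.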

TFS Release Management 2015 - How to restrict environment deployment order

Quick question.
Is there a way to constrain/restrict the order in which users can deploy builds to environments?
For example, if I have these four environments configured with manual push-button deploy (not automated), I can start all four together if I want; I don't have to wait for one to finish before kicking off the next:
DEV
TEST
STAGE
PROD
Microsoft seems to be missing this feature in TFS 2015. It would make sense to offer a deployment condition stating that previous environments must have deployed successfully before you can push-button deploy the next one.
Yes, I know, you are going to say "but you can automate that so the deploys run in the order you want." Management here does NOT want that. They want push button deployment for each environment WITH a constraint that previous environments must be completed first.
This means a manual start for each environment.
Other than having the release manager "eyeball" the situation before pushing the button for the next environment, I can't see a way to configure this rule.
Any ideas?
There is currently no restriction on manual deployments. This is by design, to give you the ability to override the release process.
Note that you can always deploy a release directly to any of the environments in your release definition by selecting the Deploy action when you create a new release.
In this case, the environment triggers you configure, such as a trigger on successful deployment to another environment, do not apply. The deployment occurs irrespective of these settings. This gives you the ability to override the release process. Performing such direct deployments requires the Manage deployments permission, which should only be given to selected and approved users.
Source link: Environment triggers
I suggest you use automated triggers. You could use parallel forked and joined deployments, in combination with the ability to define pre- and post-deployment approvals; this enables the configuration of complex, fully managed deployment pipelines to suit almost any release scenario.
If you insist on manual push-button deploys, you may have to ask the release manager to "eyeball" the situation and restrict the environment deployment order, as you mentioned.

Octopus deployment caching

We are using Octopus to deploy our project. A bunch of steps get executed during the deployment. One of them is a PowerShell script, and that script is a work in progress.
To test changes to the script, we have to either perform a dummy check-in or create a new release in Octopus after changing the PowerShell script step; then it picks up the changed steps straight away. Otherwise, the script that gets executed is the previous version.
I don't know if this is caching or some other issue. I think this is some kind of issue with Octopus, or a setting I am missing.
Please help.
An important aspect of deployment automation is ensuring that deployments are repeated exactly each time they run.
When you create a release in Octopus Deploy, the artifacts, process, and variables are all "locked in" for that release. This means that no matter what changes you make, for the lifetime of that release it will be performed identically every time.
If your deployment tool didn't do this, the same release could work in your test environment but then fail in the live environment because the deployment process changed in some way.
In effect, you release changes to the deployment process in the same way you release changes to the application itself.
This is why you need to create a new release in Octopus Deploy in order to see the changes you make.
This is both a blessing and a curse... On the one hand, your existing release scheduled for Production is protected from changes being made in lower environments. On the other hand, you are forced to recreate a release if you need to make a slight process change mid-cycle. This is arguably the correct approach, since you would want to test any changes - but maybe not relevant if your changes can only be tested in higher environments (e.g. maybe only Production is load balanced).
The software does allow updating variables mid-cycle, but not process steps. I believe this feature has been requested for a future release.
http://help.octopusdeploy.com/discussions/questions/5130-how-to-update-a-single-variable-in-an-existing-release
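A hedged sketch of that mid-cycle variable refresh via the Octopus REST API, assuming the snapshot-variables endpoint discussed in the linked thread exists on your server version; the server URL, release id, and API key below are placeholders:

```python
# Hedged sketch: ask Octopus to re-snapshot the project's current
# variables into an existing release. All values are placeholders.
import urllib.request

def snapshot_variables_request(server: str, release_id: str,
                               api_key: str) -> urllib.request.Request:
    # POST with an empty body; the API key goes in the X-Octopus-ApiKey header.
    return urllib.request.Request(
        f"{server}/api/releases/{release_id}/snapshot-variables",
        data=b"",
        headers={"X-Octopus-ApiKey": api_key},
        method="POST")
```

This only refreshes variables; process step changes still require creating a new release, as described above.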