Change the target VM Scale Set of an existing, running Azure DevOps Agent Pool?

Cheers!
Maybe some of you have already done something similar.
We created a dedicated, self-hosted Azure DevOps agent pool in one of our subscriptions with Terraform.
Terraform being Terraform, and DevOps doing its magic with the agent pools, any major update to the scale set currently results in a recreation of the scale set, with corresponding downtime. We know about the necessary ignore_changes lifecycle settings that would probably prevent that, but they are not implemented yet.
So my question is: does anyone have experience with how Azure DevOps reacts when you change the target scale set of a running agent pool?
Meaning just changing the target scale set via the Azure DevOps portal.
A little downtime is fine with us, but we would really love to be able to deploy the new infrastructure in parallel to the old agent set and then switch via the portal, like a standard blue/green deployment scheme.
Having a fallback to the old agent pool would also be a major bonus.
As long as an agent pool doesn't support more than one scale set, this seemed to be the most viable solution.
Anyone here ever tried anything like this?
Thanks!

To answer my own question:
We just pulled the plug and switched over to a new Scale Set.
The downtime is immediate, because Azure DevOps scales the "old" scale set down to 0 right after the switch.
After approximately 10-15 minutes, Azure DevOps started to scale up the new instances and added them to the agent pool.
So in a nutshell: blue/green deployment of the scale set basically worked. You can schedule new jobs while the agents are down, but jobs running at the time of the switch are terminated, as the instances are deleted right away.

Related

Build agent metrics in Azure DevOps pipelines

We pay for a number of Microsoft-hosted build agents in Azure Pipelines. We have a lot of build pipelines, many of which run jobs in parallel.
Are there any metrics I can use to see the utilization of the build agents and, even more interesting, how many jobs are queued waiting for a free build agent?
Since this would cover the whole Azure DevOps instance, the Dashboard feature doesn't seem appropriate, because it only appears to hold project-specific metrics.
Go to your Organization Settings -> Parallel jobs blade. This will give you the ability to view the jobs in progress.
As for metrics, a public preview for this just came out; however, I do not have it available yet.
Agent pool usage data is sampled and aggregated by the Analytics service every 10 mins. The number of jobs is plotted based on the max number of running jobs for the specified interval of time.
This feature is enabled by default. To try it out, follow the guidance below.
1. Within project settings, navigate to the pipelines "Agent pools" tab.
2. From the agent pool page, select a pool (e.g., Azure Pipelines).
3. Within the pool, select the "Analytics" tab.

Map service roles and replicas to servers with Azure DevOps Release

My project is a Windows Service application which can be installed in several roles (the differences are in the service name, exe path, and some settings in app.config). Each role can be scaled horizontally by instance count. All of these {roles x replica counts} should be deployed over a set of servers in specific proportions for effective performance and utilization.
As an example:
ServerA
  ServiceAlfa.1
  ServiceAlfa.2
  ServiceBravo
  ServiceDelta
ServerB
  ServiceBravo
  ServiceCharlie
  ServiceDelta.1
  ServiceDelta.2
  ServiceDelta.3
How can I achieve this with the Azure DevOps (Dev17.M153.5) tooling?
I know the brand-new YAML pipelines introduce the concept of Environments and VM resources. It's just not available in the latest stable version yet. But it's essentially a replacement for Deployment Groups, previously used for deployment to multiple machines, which I can use. I have already installed and registered the deployment agents. But I still cannot figure out how best to configure my complex mapping of instances to servers in a release pipeline.
I can create a one-job stage per role and link each stage with a corresponding variable group, like:
StageAlfa
  ServerA: 2
StageBravo
  ServerA: 1
  ServerB: 1
StageCharlie
  ServerB: 1
StageDelta
  ServerA: 1
  ServerB: 3
So I would have to check and compare the server name in my script.
Or I can do the opposite: create a stage per machine and link it with a variable group describing the number of replicas of each role on that server. Then in every stage I could select the specific machine from the deployment group by tag.
The second approach looks simpler, but they both feel so awkward!
P.S. These are Windows Services on machines, not containers in Kubernetes, due to specific Windows software dependencies.
Your approaches are correct. You may consider migrating to Azure DevOps Services or upgrading to Azure DevOps Server 2020, which supports Environments and VM resources:
https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020?view=azure-devops#continuous-deployment-in-yaml
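If you do end up on Azure DevOps Server 2020 or Azure DevOps Services, the role-to-server mapping can be expressed by tagging the VM resources in an Environment per role and targeting those tags from deployment jobs. Here is a minimal sketch; the environment name 'production', the tag 'alfa', the variable group 'ServiceAlfa-Replicas', and the script contents are illustrative assumptions, not part of the original answer:

```yaml
# Sketch for Azure DevOps Server 2020 / Azure DevOps Services YAML pipelines.
# Assumes an Environment named 'production' whose registered VMs are tagged
# with the roles they should host (e.g. ServerA tagged 'alfa'). All names
# are hypothetical.
stages:
- stage: DeployAlfa
  jobs:
  - deployment: ServiceAlfa
    environment:
      name: production
      resourceType: VirtualMachine
      tags: alfa                      # only VMs carrying the 'alfa' tag are targeted
    variables:
    - group: ServiceAlfa-Replicas     # hypothetical variable group, e.g. ServerA = 2
    strategy:
      runOnce:
        deploy:
          steps:
          - powershell: |
              # Hypothetical deployment step: decide how many replicas of
              # ServiceAlfa belong on this machine (e.g. by comparing
              # $env:COMPUTERNAME against the variable group values) and
              # install or update them accordingly.
              Write-Host "Deploying ServiceAlfa on $(Agent.MachineName)"
```

With a layout like this, each role gets its own stage, and changing the replica count on a server becomes a variable group edit rather than a pipeline change.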

Running maintenance on self-hosted Azure DevOps agents

I have several self-hosted Azure DevOps agents (each installed on a dedicated on-prem server) and I need to perform recurring maintenance on them (patching, etc.). Is there a good way to define those maintenance windows within Azure DevOps so that server admins could do their job without worrying about interrupting any ongoing build/release task?
There seems to be a setting related to configuring recurring maintenance (Organization Settings -> Agent Pools -> <Pool Name> -> Settings tab), but it appears to apply to the whole pool, and it's hard to tell which of the agents will be considered offline in which time slot.
Unfortunately, I couldn't find any documentation about it, and I'm not sure whether Azure DevOps would also be doing something on the agent machines itself (e.g. running cleanup, updating agents, and so on).
Currently, the process involves a person with admin permissions in Azure DevOps disabling an agent so a server admin can perform regular maintenance, then re-enabling it when the server admin is done. It would be great if a server admin did not have to involve an Azure DevOps admin every time for such routines.
Since you have your own Azure Pipelines agents, maintenance should be easier and you have full control over whether or not to use automatic maintenance. If you used Microsoft-hosted agents, you could not maintain them yourself, because those agents are maintained exclusively by Microsoft.
The best way to do this is to have more than one agent per machine instance and organize the agents into one pool. If you have multiple pools, you can configure a different maintenance window schedule for each pool, staggered so that each agent has time to download and configure itself.
For example, I usually schedule the maintenance window for a weekend slot, such as early Sunday morning, once a month on a certain date. I stagger the pools at 40-minute intervals so each agent has enough time to download, update, and restart itself.
Please consult this documentation for a more detailed explanation and use cases:
For Azure DevOps Server:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops-2019
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops-2019
For Azure DevOps Services (the cloud offering, formerly Visual Studio Team Services):
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops

Can one private agent from VSTS be installed on multiple VMs?

I have installed an agent on a VM and configured a CI build pipeline. The pipeline is triggered and works perfectly fine.
Now I want to use the same build pipeline and the same agent, but a different VM. Is this possible?
How will the build execution happen, and on which VM will the source be copied?
Thank you.
Like the others, I'm not sure what you're trying to do, and I also think that using the same agent across multiple machines is not possible.
But if you have to alternate or choose easily between VMs, you could set up an individual agent queue for each of the VMs used in this scenario, with one agent in each pool. That way you can choose the agent pool at queue time via the agent queue dropdown. That only works if you're triggering manually, though, not in a typical CI scenario; in that case you would have to edit the definition to enforce a particular VM each time you want to swap VMs. (A YAML sketch of this pool-per-VM selection follows below.)
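In today's YAML pipelines, the same "pick a pool at queue time" idea can be expressed with a runtime parameter instead of the classic agent queue dropdown. A minimal sketch, assuming two hypothetical pools Pool-VM1 and Pool-VM2 with one agent each:

```yaml
# Sketch: choose the agent pool (and therefore the VM) at queue time.
# 'Pool-VM1' and 'Pool-VM2' are hypothetical pools containing one agent each.
parameters:
- name: agentPool
  displayName: Agent pool (one pool per VM)
  type: string
  default: Pool-VM1
  values:
  - Pool-VM1
  - Pool-VM2

pool:
  name: ${{ parameters.agentPool }}

steps:
- script: echo "Running on $(Agent.MachineName) via pool ${{ parameters.agentPool }}"
```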
No. These private agents are supposed to have a unique name and are assigned to an agent pool/queue. They poll the VSTS/Azure DevOps server to see whether they have a job to do, then execute it. If you clone a machine with the same private build agent, then theoretically whichever agent picks up the job will execute it, but that is purely theoretical; I really don't know how the agent queues would handle this.
It depends on what you want to do.
If you want to spread the workload, e.g. 2 build servers with builds going to whichever build server isn't busy, then you would create 1 agent pool/queue. Create a private agent on one server and register it to that pool; then on the second server register another agent and add it to the SAME pool.
If you want to do work on 2 servers at exactly the same time, like a deployment to 2 servers at once, then you would create a 'Deployment Group' and add both servers to it. You would unregister both agents from the agent pool/queue, then copy the PowerShell registration snippet from your 'Deployment Group' and run it on each machine. This way you can use the group in your release pipeline and deploy in parallel, which takes less time.
You could set up a variable in the pipeline so you can specify the name of the VM at build time (see the sketch below).
Also, once you have one or more agents, you would add them to an agent pool. When builds are run, one agent is chosen from the pool and used.
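As a rough illustration of the variable idea above, a YAML pipeline can also pin a run to a specific agent (and therefore a specific VM) inside a single pool by using an agent-name demand. The pool name 'MyPool' and the agent names are hypothetical:

```yaml
# Sketch: route the job to one specific agent/VM chosen at queue time.
# 'MyPool' and 'BUILD-VM-01' are hypothetical placeholders.
parameters:
- name: targetAgent
  displayName: Agent (VM) to build on
  type: string
  default: BUILD-VM-01

pool:
  name: MyPool
  demands:
  - Agent.Name -equals ${{ parameters.targetAgent }}   # only that agent satisfies the demand

steps:
- script: echo "Building on $(Agent.Name)"
```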

How to auto retry deployments to agents when they come online again (after having been offline)

When using Azure Pipelines and deployment groups it is possible to re-deploy the "last successful" release to new agents with given "tags", using the instructions found here:
https://learn.microsoft.com/en-us/azure/devops/release-notes/2018/jul-10-vsts#automatically-deploy-to-new-targets-in-a-deployment-group
My issue is with releasing to a deployment group consisting of 3 machines: 2 are online and 1 is periodically offline. In this situation my release fails while the 1 machine is offline. This would be OK by me if Azure Pipelines retried the deployment when the offline machine comes back online. I thought this would work in the same way as "new targets", but I still haven't figured out how.
This is just a small test. When going in production my deployment group will consist of hundreds of machines and not all of them will be online at the same time.
So: is it possible to automate the process so that all machines will eventually be up to date once each of them has been online?
Octopus Deploy seems to have this feature:
https://help.octopusdeploy.com/discussions/questions/9351-possibility-to-deploy-when-agent-become-online
https://octopus.com/docs/deployment-patterns/elastic-and-transient-environments/deploying-to-transient-targets
[Screenshot: status after a failed deployment, once the target is online again]
Well, in general, queued deployments will be triggered automatically once the agent is online. But failed deployments have to be re-deployed manually; there is no way to retry them automatically when the agent comes online again...
Based on my test, to redeploy to all not-yet-updated agents, you have to remove from the deployment group the other target machines that passed in the previous deployment...