Azure DevOps - Schedule agent update

We are running UI tests using an Azure interactive agent, but sometimes an agent update affects our test run. Can we schedule the agent update to happen at a specific time?
I thought scheduling maintenance for agents would solve this, but no luck.

Sorry, there is no built-in feature to schedule agent updates.
If you do not want the agent to update, you could try setting agent.disableupdate=true and check whether that does the trick.
Keep in mind, though, that the server sends the update request and waits for the agent to update before considering the agent available.
This is expected behavior: the agent only updates itself when it must, i.e. when you use a feature on the service that requires a newer agent.
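If you want to try that, here is a minimal sketch of one way to apply the setting; the .env file mechanism and the exact effect of the variable are assumptions on my part, so verify them against the documentation for your agent version:

# appended to the .env file in the agent's root directory (assumed mechanism;
# check the docs for your agent version), then restart the agent
agent.disableupdate=true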
You could also take a look at this similar question: Prevent agents automatically updating.

Related

Azure DevOps Agent - Custom Setup/Teardown Operations

We have a cloud full of self-hosted Azure agents running on custom AMIs. In some cases I have cleanup operations which I'd really like to run either before or after a job executes on the machine, but I don't want the developer who is waiting on the job to have to wait at the beginning or the end of it (which holds up other stages).
What I'd really like is to have the Azure Agent itself say "after this job finishes, I will run a set of custom scripts that will prepare for the next job, and I won't accept more work until that set of scripts is done".
In a pinch, maybe just a "cooldown" setting would work -- wait 30 seconds before accepting another job. (Then at least a job could trigger some background work before finishing.)
Has anyone had experience with this, or knows of a workable solution?
I suggest three solutions:
1. Create another pipeline to run the cleanup tasks on the agents. You can add a demand for a specific agent (see https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml) via Agent.Name -equals [Your Agent Name], and you can set the frequency to minutes, hours, or whatever you like using a cron pattern; see the sketch after this list. While this pipeline is running and occupying the agent, the agent being cleaned will not be available for other jobs. Do note that you can trigger this pipeline from another pipeline, but if both use the same agents they can deadlock.
2. Create a template containing script tasks with all the cleanup logic and use it at the end of every job (which you have discounted).
3. Rather than using static VMs to host agents, use an Azure scale set for self-hosted agents: every time agents are scaled down they are gone, and when scaled up they start fresh. This also saves a lot of money, since agents are not left sitting idle when no one is working. We use this option and moved away from static agents. We have also used Packer to rebuild the VM image/VHD overnight, updating it with patches, required software, and cached Docker images.
ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops
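Here is a minimal sketch of the first suggestion as an Azure Pipelines YAML definition; the cron schedule, pool name, agent name, and cleanup script are all assumptions to replace with your own:

# Hypothetical scheduled cleanup pipeline, pinned to one agent by a demand
schedules:
- cron: "0 */4 * * *"           # every four hours; adjust to taste
  displayName: Periodic agent cleanup
  branches:
    include:
    - main
  always: true                  # run even when there are no new changes

pool:
  name: SelfHostedPool          # hypothetical pool name
  demands:
  - Agent.Name -equals MyBuildAgent01   # pin to the agent being cleaned

steps:
- script: ./cleanup.sh          # hypothetical cleanup script on the agent
  displayName: Clean workspace and caches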
For those discovering this question, there's a much better way: run your self-hosted agent with the --once flag, documented here.
You'll need to wrap it in your own bash script, but something like this works:
#!/bin/bash
# Loop forever: do setup, let the agent accept exactly one job (--once),
# then clean up before the next iteration makes the agent available again.
while :
do
    echo "Performing pre-job setup..."
    echo "Waiting for job..."
    ./run.sh --once        # the agent exits after completing a single job
    echo "Cleaning up..."
    sleep 2
done
Another option would be to use a scale set VM setup, which preps a new agent for each run and discards the VM when the run is done. It can prepare new VMs in the background while a job is running.
And I suspect you could implement your own IMaintenanceProvider:
https://github.com/microsoft/azure-pipelines-agent/blob/master/src/Agent.Worker/Maintenance/MaintenanceJobExtension.cs#L53

Azure worker - Complete the release pipeline

I have a release pipeline composed of several stages.
The TFS server has only one worker.
When I start a release, the worker runs the stages in a seemingly random order.
The problem is: if someone else starts another pipeline, the worker sometimes picks up the other pipeline before getting back to the next stage of mine.
Is there a way to lock the worker to the entire release pipeline?
When running a pipeline, a job is the unit of scale. This means each job can potentially run on a different agent.
Each job runs on an agent. A job represents an execution boundary of a set of steps. All of the steps run together on the same agent.
Beyond that,
A stage is a logical boundary in the pipeline. It can be used to mark separation of concerns (for example, Build, QA, and production).
More information: Azure Pipelines - Key concepts.
You should also have a look at the Pipeline run sequence. It clearly explains how the entire pipeline-process works.
Whenever Azure Pipelines needs to run a job, it will ask the pool for an agent.
If the availability of the agent is the issue, you might want to add more agents to the pool. If needed, you can even run multiple agents on the same machine.
Although multiple agents can be installed per machine, we strongly suggest to only install one agent per machine. Installing two or more agents may adversely affect performance and the result of your pipelines.
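To illustrate the job-per-agent model, here is a minimal hypothetical sketch (the stage, job, and step names are placeholders). Each job below is scheduled independently, so each one may land on a different agent from the pool:

stages:
- stage: Build
  jobs:
  - job: BuildJob               # requests its own agent from the pool
    steps:
    - script: echo "building..."
- stage: QA
  jobs:
  - job: DeployToQA             # may be picked up by a different agent
    steps:
    - script: echo "deploying to QA..."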

A release pipeline is queued with multiple agents idle

We have an issue where we currently have a single release running and one release queued. Normally this would be OK, but we run self-hosted agents and currently have 9 agents sitting idle that could run the release. I keep seeing mentions of parallel jobs, but I don't know whether that solves this, as this feels more like something I have misconfigured.
Since this post we have purchased 10 parallel jobs for the self-hosted pool and are still seeing this issue.
In your current situation, we recommend checking your Deployment queue settings; this setting configures what happens when multiple releases are queued for deployment. You can select the 'Unlimited' option.

Timeout for a whole pipeline w/manual steps in Azure DevOps

I have created a release pipeline in Azure DevOps with several stages, deploying to each environment. On some of the environments (Test and Production) I have manual approval tasks (not set in YAML, but on the environment). If the approval task is not performed within a set time, I want the whole pipeline to cancel.
I have set a timeoutInMinutes on the stage itself; however, the timeout never starts, as the stage is waiting for the approval before it can start at all.
I haven't found a way to set a timeout on the approval/review activity, nor have I found a way to have a separate stage/job, independent of the others, sit and wait for a timeout and then cancel the run with the logging command ##vso[task.complete result=Canceled;]DONE.
The pipeline just sits and waits forever. Any ideas?
Yes, you are right. I could reproduce this issue on my side.
As we know, the timeout exists for this reason:
To avoid taking up resources when your job is hung or waiting too long, it's a good idea to set a limit on how long your job is allowed to run.
When we set checks on the stage, the job is in the Pending state, not Running, and the timeout we set has not yet started; it only starts counting once the job starts running. So we would need a timeout for the checks themselves, just like the timeout for pre-deployment approvals:
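For reference, this is the job-level timeout under discussion, as a minimal sketch (the job name and step are placeholders). It only takes effect once the job leaves the Pending state:

jobs:
- job: DeployToTest             # hypothetical job name
  timeoutInMinutes: 60          # counted from when the job starts running,
                                # not from when it is queued behind an approval
  steps:
  - script: echo "deploying..."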
I could not find any solution/workaround after spending a long time on it, but after confirming with the Azure DevOps team I found that this is a highly prioritized feature request that they are tracking.
The status of this 'timeout for checks' feature request is In Progress as of sprint 158.
I believe this feature will reach us soon; please keep an eye on the release notes of Azure DevOps. Thank you for helping us build a better Azure DevOps.
Hope this helps.

Can one private agent from VSTS be installed on multiple VM's?

I have installed an agent on a VM and configured a CI build pipeline. The pipeline is triggered and works perfectly fine.
Now I want to use the same build pipeline and the same agent, but a different VM. Is this possible?
How will builds be executed, and onto which VM will the source be copied?
Thank you.
Like the others, I'm also not sure what you're trying to do, and I also think that using the same agent across multiple machines is not possible.
But if you need to alternate or choose easily between VMs, you could set up an individual agent queue with a single agent for each of the VMs used in this scenario. That way you can choose the agent pool at queue time via the agent queue dropdown field. That only works if you're triggering manually, though, not in a typical CI scenario; in that case you would have to edit the definition to enforce a particular VM each time you want to swap VMs.
No. Private agents are supposed to have a unique name and are assigned to an agent pool/queue. They poll VSTS/Azure DevOps Server to see whether they have a job to do, and then they execute it. If you clone a machine with the same private build agent on it, then in theory whichever agent picks up the job will execute it, but that is purely theoretical; I really don't know how the agent queues would handle this.
It depends on what you want to do.
If you want to spread the workload, like two build servers where builds go to whichever server isn't busy, then create one agent pool/queue: create a private agent on the first server and register it to that pool, then install an agent on the second server and register it to the SAME pool.
If you want to do work on two servers at exactly the same time, like deploying to two servers simultaneously, then create a Deployment Group and add both servers to it. You would unregister both agents from the agent pool/queue, then copy the PowerShell script snippet from your Deployment Group and run it on each machine. This way you can deploy in parallel from your release pipeline, which makes deployments take less time.
You could set up a variable in the pipeline so you can specify the name of the VM at queue time; see the sketch below.
Also, once you have one or more agents, you would add them to an agent pool. When builds run, one agent is chosen from the pool and used.
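As a hedged sketch of that variable idea (the parameter name, pool name, and agent names are all assumptions), a runtime parameter can feed an agent-name demand so the target VM is chosen when the build is queued:

parameters:
- name: targetAgent
  displayName: Agent (VM) to run on
  type: string
  default: BuildVM01
  values:                       # shown as a dropdown at queue time
  - BuildVM01
  - BuildVM02

pool:
  name: SelfHostedPool          # hypothetical pool name
  demands:
  - Agent.Name -equals ${{ parameters.targetAgent }}

steps:
- script: echo "Running on $(Agent.Name)"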