I have a release pipeline composed of several stages.
The TFS server has only one agent (worker).
When I start a release, the agent picks up the stages in no guaranteed order.
The problem is: if someone else starts another pipeline, the agent sometimes picks up that other pipeline before coming back to the next stage of mine.
Is there a way to lock the agent to the entire release pipeline?
When running a pipeline, a job is the unit of scale. This means each job can potentially run on a different agent.
Each job runs on an agent. A job represents an execution boundary of a set of steps. All of the steps run together on the same agent.
In addition:
A stage is a logical boundary in the pipeline. It can be used to mark separation of concerns (for example, Build, QA, and production).
More information: Azure Pipelines - Key concepts.
You should also have a look at the Pipeline run sequence. It clearly explains how the entire pipeline process works.
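As a rough illustration (the stage and job names here are made up), a minimal YAML pipeline with two stages could look like the sketch below; each job is requested from the pool separately, so the two jobs may end up on different agents:

stages:
- stage: Build
  jobs:
  - job: BuildJob            # may be picked up by one agent
    steps:
    - script: echo Building

- stage: QA
  dependsOn: Build
  jobs:
  - job: TestJob             # may be picked up by a different agent
    steps:
    - script: echo Testing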
Whenever Azure Pipelines needs to run a job, it will ask the pool for an agent.
If the availability of the agent is the issue, you might want to add more agents to the pool. If needed, you can even run multiple agents on the same machine.
Although multiple agents can be installed per machine, we strongly suggest installing only one agent per machine. Installing two or more agents may adversely affect performance and the results of your pipelines.
Related
We are running into the following issue:
We have a job in our pipeline that runs tests. The tests need to be distributed over 4 agents to run optimally. It can happen that only one agent is available, and the job then starts running the entire load on that one agent, which can time out because the other agents do not become available in time to share the load.
In essence, if we run with 4 agents, the job will run with optimal efficiency.
My question: is it possible to let a job wait for a specific number of agents to become available before starting the tasks in the job?
That's not possible with out-of-the-box features, but you can create a simple PowerShell script that queries your agents' statuses: https://learn.microsoft.com/en-us/rest/api/azure/devops/distributedtask/agents/list?view=azure-devops-rest-7.1
and use the includeAssignedRequest parameter:
GET https://dev.azure.com/{organization}/_apis/distributedtask/pools/{poolId}/agents?includeAssignedRequest={includeAssignedRequest}&api-version=7.1-preview.1
If an agent has an assignedRequest in the response, that build agent is busy.
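A minimal sketch of that idea as an inline PowerShell step in a pipeline; the pool id is a placeholder, and it assumes the pipeline identity behind $(System.AccessToken) has permission to read the agent pool:

steps:
- powershell: |
    # Placeholder pool id; find it under Organization settings > Agent pools.
    $poolId = 1
    $url = "$(System.TeamFoundationCollectionUri)_apis/distributedtask/pools/$poolId/agents?includeAssignedRequest=true&api-version=7.1-preview.1"
    $headers = @{ Authorization = "Bearer $(System.AccessToken)" }
    $agents = (Invoke-RestMethod -Uri $url -Headers $headers).value
    foreach ($agent in $agents) {
      if ($agent.assignedRequest) { Write-Host "$($agent.name) is busy" }
      else { Write-Host "$($agent.name) is idle" }
    }
    # Count online agents with no assigned request, e.g. to wait until 4 are free.
    $idle = @($agents | Where-Object { $_.status -eq 'online' -and -not $_.assignedRequest }).Count
    Write-Host "Idle online agents: $idle"
  displayName: Check agent availability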
We're using SonarQube for tests, and there's one token it uses. As long as only one pipeline is running, it goes fine, but if I run different pipelines (all of them have E2E tests as their final jobs), they all fail, because they keep calling a token that expires as soon as it's used by one pipeline (job). Would it be possible to have all pipelines pause at job "x" if they detect some pipeline already running job "x"? The jobs have the same names across all pipelines. Yes, I know this is solved by just running one pipeline at a time, but that's not what my devs want to do.
The best way to make jobs run one by one is to set demands so that the agent job runs on a specific self-hosted agent: set a user-defined capability on that self-hosted agent, then require the job to run on it by adding a matching demand to the agent job (a minimal YAML sketch follows).
This way, agent jobs will only run on that one agent, and builds will run one by one, each waiting until the previous one completes.
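The pool name and the user-defined capability name below are placeholders:

pool:
  name: Default                        # your self-hosted agent pool
  demands:
  - DedicatedBuildAgent -equals true   # user-defined capability set on that one agent

steps:
- script: echo This job only runs on the agent that has the capability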
Besides that, you can control whether a job should run by defining approvals and checks. For example, use the Invoke REST API check to call a REST API such as "Gets a list of builds", and define the success criteria as the running build count being zero; only then does the next build start.
I am using a Windows self-hosted agent for my Azure DevOps pipelines. Currently the pipelines are executed sequentially. If more than one pipeline is triggered from different ADO projects, the others have to wait in the queue to get the agent. To execute the pipelines in parallel, some tutorials suggest increasing the paid parallel jobs for self-hosted agents under the billing section of the Organization settings. Is my understanding correct? If so, what precautionary steps do I need to take? Do we have any control over when the pipelines are executed in parallel?
Thanks.
In order to run self-hosted parallel jobs, you need to purchase parallel jobs and register several self-hosted agents.
For parallel jobs, you can register any number of self-hosted agents in your organization. If you want to run 3 jobs in parallel, then you must register at least 3 self-hosted agents in one agent pool. DevOps charges based on the number of jobs you want to run at a time, not the number of agents registered. There are no time limits on self-hosted jobs. For private projects, you can have one job and one additional job for each active Visual Studio Enterprise subscriber who is a member of your organization.
For how to purchase parallel jobs, please refer to Buy parallel jobs.
For how to control the use of parallel jobs, please refer to the following:
For a classic pipeline, you can specify when to run a job through Dependencies and the "Run this job" option under Additional options in the agent job. The pipeline will then run in sequence according to your settings.
For a YAML pipeline, you can specify the conditions under which a job should run with "dependsOn" and "condition".
For example:
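(A minimal sketch; the job names are placeholders.)

jobs:
- job: A
  steps:
  - script: echo Running job A

- job: B
  dependsOn: A              # B waits for A to finish
  condition: succeeded()    # and only runs if A succeeded
  steps:
  - script: echo Running job B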
For more info about conditions, please refer to Specify conditions
If you don't specify an order, the jobs will run in parallel, up to the number of parallel jobs you purchased.
I don't know if my experience can help. I'll try. I started a new job and we use self-hosted TFS / Azure DevOps. I am changing our build process to create 3 product SKUs (it uses conditional compilation). Let's call them Good, Better & Best.
I edited the Build definition. First I switched to the Variables tab. I created a Process variable named SKUs and set it to Good,Better,Best. The commas are important.
Next I switched to the Tasks tab. I located the Agent Phase. Mine was called Phase 1. Select it. On the right, under Parallelism, I selected Multi-configuration. In the Multipliers text field I entered SKUs. I set Maximum number of agents to 3.
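For reference, a hedged YAML sketch of the same multi-configuration idea uses a matrix strategy; the SKU names mirror the ones above:

jobs:
- job: Build
  strategy:
    matrix:
      Good:
        SKU: Good
      Better:
        SKU: Better
      Best:
        SKU: Best
    maxParallel: 3          # same as "Maximum number of agents"
  steps:
  - script: echo Building the $(SKU) SKU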
What I don't yet know is the TFS back-end administration and options that the company purchased beforehand.
I have the following question on how jobs are scheduled onto agents in an Agent pool.
AzDO Job Scheduling on Agents
This pertains to how Azure DevOps decides which agent from the pool gets to run a job.
The expectation is that jobs will be evenly distributed across the agents in the pool. However, we are noticing that only one of the agents is repeatedly the target of job executions, which skews the agent usage: the rest of the agents sit idle while jobs are waiting.
I examined whether there are any demands/capabilities placed on the agents; there are none.
Questions:
1. What is the algorithm or job scheduling policy used to pick the agents? Is there any default stickiness, meaning that once an agent is selected from the pool, subsequent jobs stick to the same agent?
2. Why is only a single agent out of the multiple agents in the pool being used, while the rest sit idle?
ADO does not pick an agent. The agents "ask" ADO if there is new work for them: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser#communication-with-azure-pipelines
You mention "jobs". I'm not sure if you mean the technical term of an ADO job. If so: jobs belong to a stage, and each job is scheduled onto an agent independently. All steps within a job run on the same agent, but different jobs, and subsequent stages, might run on different agents.
I assume you are not using "Capabilities"?! Otherwise that might explain the behavior that you are seeing.
We have a cloud full of self-hosted Azure agents running on custom AMIs. In some cases, I have cleanup operations that I'd really like to run either before or after a job runs on the machine, but I don't want the developer who is waiting on the job to have to wait at either the beginning or the end of the job (which holds up other stages).
What I'd really like is to have the Azure Agent itself say "after this job finishes, I will run a set of custom scripts that will prepare for the next job, and I won't accept more work until that set of scripts is done".
In a pinch, maybe just a "cooldown" setting would work -- wait 30 seconds before accepting another job. (Then at least a job could trigger some background work before finishing.)
Has anyone had experience with this, or knows of a workable solution?
I suggest three solutions:
1. Create another pipeline to run the clean-up tasks on the agents. You can target a specific agent by adding a demand (see https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops&tabs=yaml) such as Agent.Name -equals [Your Agent Name], and schedule it to run every few minutes or hours using a cron pattern; a minimal YAML sketch follows this list. While this pipeline is running and occupying the agent, the agent being cleaned will not be available for other jobs. Do note that you can trigger this pipeline from another pipeline, but if both use the same agents, they can deadlock.
2. Create a template containing script tasks with all the clean-up logic and use it at the end of every job (which you have already ruled out).
3. Rather than using static VMs for agent hosting, use Azure scale set agents for self-hosted agents: every time agents are scaled down they are discarded, and when scaled up they start fresh. This also saves a lot of money, since agents are not sitting idle when no one is working. We use this option and moved away from static agents. We also use Packer to rebuild the VM image/VHD overnight with the latest patches, required software, and cached Docker images.
ref: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/scale-set-agents?view=azure-devops
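For the first suggestion, a minimal YAML sketch of such a cleanup pipeline might look like this; the schedule, pool name, and agent name are assumptions:

schedules:
- cron: "0 * * * *"              # every hour
  displayName: Hourly agent cleanup
  branches:
    include:
    - main
  always: true                   # run even if nothing changed

pool:
  name: SelfHosted               # placeholder pool name
  demands:
  - Agent.Name -equals MyBuildAgent01   # pin the run to the agent being cleaned

steps:
- script: |
    echo "Cleaning workspace and caches..."
    # e.g. docker system prune -f, remove temp folders, etc.
  displayName: Clean up agent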
For those discovering this question, there's a much better way: run your self-hosted agent with the --once flag, documented here.
You'll need to wrap it in your own bash script, but something like this works:
while :
do
  # Custom pre-job setup would go here
  echo "Performing pre-job setup..."

  # Accept a single job, run it, then exit
  echo "Waiting for job..."
  ./run.sh --once

  # Custom post-job cleanup would go here
  echo "Cleaning up..."
  sleep 2
done
Another option would be to use a ScaleSet VM setup which preps a new agent for each run and discards the VM when the run is done. It can prepare new VMs in the background while the job is running.
And I suspect you could implement your own IMaintenanceProvider:
https://github.com/microsoft/azure-pipelines-agent/blob/master/src/Agent.Worker/Maintenance/MaintenanceJobExtension.cs#L53