We have deployed self-hosted agents as Docker images managed by Azure Container Instances. We run release jobs in parallel and therefore want to restrict each agent to one job at a time. The agent should either restart or be deleted after the job completes.
In the release pipeline, you can set a demand on the job so that it runs on a specific agent.
Demands: specify which capabilities the agent must have to run this pipeline.
Alternatively, you can add an additional job that runs on a hosted agent, add a PowerShell task to that job, and delete the self-hosted agent that has just finished running its job by calling the Agents - Delete REST API:
DELETE https://dev.azure.com/{organization}/_apis/distributedtask/pools/{poolId}/agents/{agentId}?api-version=6.0
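As a minimal sketch of such a cleanup job (the job name Release, the variables poolId and agentId, and the secret variable AZURE_DEVOPS_PAT holding a PAT with Agent Pools Read & manage scope are all assumptions for illustration):

jobs:
- job: RemoveAgent
  dependsOn: Release              # assumed name of the job that ran on the self-hosted agent
  pool:
    vmImage: 'windows-latest'     # hosted agent, so it is not affected by deleting the self-hosted one
  steps:
  - powershell: |
      # Build a Basic auth header from the PAT passed in via the env mapping below.
      $base64 = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$env:AZDO_PAT"))
      $headers = @{ Authorization = "Basic $base64" }
      # poolId and agentId are assumed pipeline variables identifying the agent to remove.
      $url = "https://dev.azure.com/{organization}/_apis/distributedtask/pools/$(poolId)/agents/$(agentId)?api-version=6.0"
      Invoke-RestMethod -Uri $url -Method Delete -Headers $headers
    displayName: Delete the self-hosted agent after the release job
    env:
      AZDO_PAT: $(AZURE_DEVOPS_PAT)   # assumed secret variable containing the PAT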
Related
We're using SonarQube for tests, and there's one token it uses. As long as only one pipeline is running, it goes fine, but if I run different pipelines (all of them have E2E tests as final jobs), they all fail, because they keep calling a token that expires as soon as it's used by one pipeline (job). Would it be possible to have -all- pipelines pause at job "x" if they detect some pipeline running job "x" already? The jobs have the same names across all pipelines. Yes, I know this is solved by just running one pipeline at a time, but that's not what my devs wanna do.
The best way to make the jobs run one by one is to set demands so that the agent job runs on a specific self-hosted agent. As below, set a user-defined capability on the self-hosted agent and then require that agent by setting demands on the agent job.
In this way, the agent jobs will only run on this one agent, and the builds will run one by one, each waiting until the previous one completes.
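If your pipelines are defined in YAML, the demand could look roughly like this; the pool name Default and the capability name/value SonarToken = true are assumptions, use whatever user-defined capability you set on the agent:

pool:
  name: Default                  # assumed self-hosted agent pool
  demands:
  - SonarToken -equals true      # user-defined capability added to the one agent allowed to run these jobs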
Alternatively, you can control whether a job should run by defining approvals and checks. Use the Invoke REST API check to call an API such as Builds - List, and define the success criteria so that the next build only starts when the count of running builds is zero.
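For reference, a request along the lines of the one below returns only the in-progress builds, so the check's success criteria can simply assert that the returned count is zero (filtering on statusFilter=inProgress is one reasonable choice, not the only one):
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds?statusFilter=inProgress&api-version=6.0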
I find it hard to grasp the concept of deployment jobs and environments in Azure Pipelines. From what I understand, a deployment job is a sequence of steps to deploy your application to a specific environment, hence the environment field.
If so, why is there also a pool definition for agent pool for that job definition?
EDIT
What bothers me is that, from what I understand, an Environment is a collection of resources that you can run your application on. So you'll define some for dev, some for stage, prod, etc. So you want to run the job on these targets. So why do we need to specify an agent pool to run the deployment job on? Shouldn't it run on the resources that belong to the specified environment?
EDIT
Take this pipeline definition for example:
jobs:
# Track deployments on the environment.
- deployment: DeployWeb
  displayName: deploy Web App
  pool:
    vmImage: 'Ubuntu-16.04'
  # Creates an environment if it doesn't exist.
  environment:
    name: 'Stage'
    resourceType: VirtualMachine
  strategy:
    # Default deployment strategy, more coming...
    runOnce:
      preDeploy:
        steps:
        - script: echo "Hello"
      deploy:
        steps:
        - checkout: self
        - script: echo my first deployment
I have an environment called "Stage" with one virtual machine on it.
When I run it, I can see both jobs (the preDeploy and deploy phases) run on my VM.
The agent pool specified is NOT USED at all.
However, if I target another environment with no machines in it, it runs on an Azure Pipelines hosted VM.
Why do you specify a pool in a deployment job in Azure Pipelines?
That's because an environment is a collection of resources that you can target with deployments from a pipeline.
In other words, it is like the machine that hosts our private agent, except that it can now be a virtual environment, such as Kubernetes, a VM, and so on.
When we specify an environment, it only provides us with a target (you can think of it as a machine). However, there is no agent installed on that virtual environment to run the pipeline for us, so we need to specify an agent pool to run the pipeline.
For example, if we execute our pipeline against our local machine, we still need to create a private agent; otherwise we only have the target environment, but there is no agent to host the pipeline run.
The environment field denotes the target environment to which your artifact is deployed. There are commonly multiple environments through which the artifacts flow, for example development -> test -> production. Azure DevOps uses this field to keep track of which versions are deployed to which environment, and so on. From the docs:
An environment is a collection of resources that you can target with deployments from a pipeline. Typical examples of environment names are Dev, Test, QA, Staging, and Production.
The pool is a reference to the agent pool. The agent is the machine executing the logic inside the job. For example, a deployment job might have several logical steps, such as scripts, file copying etc. All this logic is executed on the agent that comes from the agent pool. From the docs:
To build your code or deploy your software using Azure Pipelines, you need at least one agent. As you add more code and people, you'll eventually need more.
When your pipeline runs, the system begins one or more jobs. An agent is computing infrastructure with installed agent software that runs one job at a time.
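To make the relationship concrete, here is a minimal sketch; the environment name 'Dev' and the assumption that it has no VM or Kubernetes resources registered are illustrative. The environment only records where the deployment goes, while every step still executes on an agent taken from the specified pool:

jobs:
- deployment: DeployApp
  # 'Dev' is an assumed environment with no VM/K8s resources registered,
  # so it is only used to track deployment history.
  environment: 'Dev'
  # The steps below therefore execute on an agent supplied by this pool.
  pool:
    vmImage: 'ubuntu-latest'
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "Runs on the hosted agent from the pool"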
I have one Windows self-hosted agent for our Azure DevOps pipelines. If we run two pipelines, one has to wait for the other to complete. Is there any way to run the pipelines in parallel by changing some configuration on the agent?
Check this doc (Self-hosted agent):
For public projects that are self-hosted, you can have unlimited parallel jobs running. For private projects, you can have one job and one additional job for each active Visual Studio Enterprise subscriber who is a member of your organization.
Is there any way to run the pipelines in parallel?
If you are using a public project, the number of parallel jobs is unlimited; if you are using a private project, the default is one self-hosted parallel job. We need to buy additional self-hosted parallel jobs, and then we can run the pipelines in parallel.
In addition, we can open Organization Settings -> Parallel jobs to check the number of parallel jobs.
To buy self-hosted parallel jobs: open Organization Settings -> Billing -> Set up billing and buy self-hosted parallel jobs.
Note: we need to install another self-hosted agent and then we can run two pipelines at the same time.
Update1
To install another agent, we can install it into the same agent pool, or create another agent pool and install the new agent there.
Steps:
Open Organization Settings -> Agent pools -> open the Default agent pool and click the New agent button to download the self-hosted agent zip file, then install another agent from that file and enter a different agent name.
If you buy more parallel executions you can do that. All you need to do is install another Azure DevOps agent service on the same box and register it.
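For reference, an unattended registration of a second agent from the extracted agent folder can look roughly like this on Windows (the pool name, agent name, and PAT are placeholders):
.\config.cmd --unattended --url https://dev.azure.com/{organization} --auth pat --token <PAT> --pool Default --agent Agent-02 --runAsService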
I'm using Azure Pipelines agents on several machines, those machines are in a deployment group, and I have a DevOps release which does some things on each machine. If the Azure Pipelines agent isn't running on a machine at release time, the release will skip that machine. How can I know which machines were skipped?
How can I know which machines were skipped?
The easiest way is to manually check the detailed deployment log; from it you can get the names of the skipped agents.
On the other hand, you can also use the REST API Releases - Get Release. In the API response you can check the job status and the agent name.
Here is a sample:
GET https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/{releaseId}?api-version=6.0
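As a rough PowerShell sketch of reading that response (the PAT and the exact property path below are assumptions; inspect the raw JSON of your release to confirm the shape):

$pat = "<PAT>"   # personal access token with read access to releases; placeholder
$base64 = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $base64" }
$url = "https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/{releaseId}?api-version=6.0"
$release = Invoke-RestMethod -Uri $url -Headers $headers

# Walk the deployment jobs of every environment and print agent name and status,
# so the machines that were skipped stand out.
foreach ($releaseEnv in $release.environments) {
  foreach ($attempt in $releaseEnv.deploySteps) {
    foreach ($phase in $attempt.releaseDeployPhases) {
      foreach ($deploymentJob in $phase.deploymentJobs) {
        Write-Host "$($deploymentJob.job.agentName): $($deploymentJob.job.status)"
      }
    }
  }
}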
After reading the very short official documents about the Azure Pipelines agent, I am getting very confused.
What exactly is an azure pipeline agent?
What is an agent job?
What's the relationship between agent and VM?
What's the relationship between an agent job and a VM? For each agent, will one VM be temporarily assigned to it and returned to the pool after the agent job finishes?
If two different agent jobs run by two agents need the same running environment, and the VM is agent-job dependent, how should I retain the first agent job's running environment after it has finished running? Recreate it again?
If each agent needs a VM, why create this concept? Why not just directly use the VM or container?
A pipeline agent is the machine where your build is performed. An agent is installable software that runs one job at a time.
An agent job is a set of steps that is treated as an execution boundary. Each job runs on an agent, and all of the steps in the job run together on the same agent.
From that perspective you can distinguish two kinds of jobs: the ones run by agents installed on a VM and the ones run by agents installed in a container.
An agent job runs on an agent, which can be installed on a VM. VMs are not assigned; agents are assigned. There is an agent pool, not a VM pool.
I don't understand this one. After finishing their job, agents go back to the pool.
You may have more than one agent on a VM, for instance one agent installed directly on the VM and a few others running as containers.
Please take a look here; you will find an explanation of these concepts.