Command into the VM that runs an Azure DevOps pipeline - azure-devops

I'm new to Azure DevOps Pipelines, and currently nothing works.
I am using Azure DevOps Services with a Microsoft-hosted agent. Can I somehow keep the VM that runs the pipeline running? I want to test my azure-pipelines.yml file faster by accessing that VM through a terminal.

You cannot access Microsoft-hosted agents via terminal. They are assigned to your build, and afterwards they return to the pool to be used by someone else.
If you want terminal access to agents, you must have your own. You can create them on your own Azure VMs, for instance.

He is right: hosted agents are ephemeral machines that are disposed of when the pipeline is done. If you want to debug, such as checking files or working out why something isn't working, you need a self-hosted agent. It can be on your own computer for debugging, and you use the hosted one during normal processing, as in the sketch below.
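A minimal sketch of what that split looks like in azure-pipelines.yml; MySelfHostedPool and the agent name are hypothetical:

# Normal runs: a Microsoft-hosted image, created fresh for every run and discarded after.
pool:
  vmImage: ubuntu-latest

# Debugging runs: swap in your own pool, whose machine you can open a terminal on.
# pool:
#   name: MySelfHostedPool
#   demands:
#   - agent.name -equals my-debug-agent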

Related

Configure an Azure VM using Terraform and Ansible with the Azure CLI and without SSH keys in Azure Pipelines

I'm using Azure Pipelines with Terraform to create dynamic infrastructure (an unknown number of VMs, NICs, and so on) in Azure.
I also tried using Ansible in a separate stage to configure my VMs, but that got complicated because I had to create SSH keys and have Ansible use them. So I want to configure my Azure VMs without SSH keys and use the Azure CLI instead. I know Ansible is agentless and uses SSH to communicate with VMs, but I hope there is a module that allows us to configure a VM using the Azure CLI.
Alternatively, if anyone already has an existing project like this that is well organised and requires less work, I would be grateful.
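The thread leaves this unanswered, but one SSH-free route is Azure's Run Command feature, which executes a script inside the VM through the Azure control plane rather than over SSH. A hedged sketch as an Azure CLI task in the pipeline; the resource group, VM name, and my-arm-connection service connection are placeholders, and the installed package is just an example:

steps:
- task: AzureCLI@2
  displayName: Configure VM without SSH
  inputs:
    azureSubscription: my-arm-connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Run Command delivers the script to the VM via the Azure guest agent,
      # so no SSH keys are involved.
      az vm run-command invoke \
        --resource-group my-rg \
        --name my-vm \
        --command-id RunShellScript \
        --scripts "apt-get update && apt-get install -y nginx"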

How to know which machines were skipped during DevOps Release using Azure Pipeline Agents for a Deployment Group

I'm using Azure Pipeline agents on machines, those machines belong to a deployment group, and I have a DevOps release which does some things on each machine. If the Azure Pipeline agent isn't running on a machine at release time, the release will skip over that machine. How can I know which machines were skipped?
How can I know which machines were skipped?
The easiest way is to manually check the detailed deployment log: each machine's job is listed there with its status, so you can read the skipped agents' names straight from it.
On the other hand, you could also use the REST API Releases - Get Release. In the API response, you can check each job's status and the agent name.
Here is a sample:
GET https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/{releaseId}?api-version=6.0
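If you want to automate that check, here is a hedged sketch that calls the endpoint above and walks the response, shown as a YAML step (the same script works in a classic PowerShell task). The response path environments -> deploySteps -> releaseDeployPhases -> deploymentJobs is what the 6.0 API returns in my experience, but verify it against your own response; {organization} and {project} are placeholders as above:

steps:
- powershell: |
    # Release.ReleaseId is a predefined variable in release pipelines;
    # System.AccessToken must be made available to scripts.
    $url = "https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/$(Release.ReleaseId)?api-version=6.0"
    $release = Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $(System.AccessToken)" }
    foreach ($environment in $release.environments) {
      foreach ($step in $environment.deploySteps) {
        foreach ($phase in $step.releaseDeployPhases) {
          foreach ($deploymentJob in $phase.deploymentJobs) {
            if ($deploymentJob.job.status -eq 'skipped') {
              Write-Host "Skipped: agent $($deploymentJob.job.agentName) in $($environment.name)"
            }
          }
        }
      }
    }
  displayName: List skipped deployment group machines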

Running maintenance on self-hosted Azure DevOps agents

I have several self-hosted Azure DevOps agents (each installed on a dedicated on-prem server) and I need to perform recurring maintenance on them (patching, etc.). Is there a good way to define those maintenance windows within Azure DevOps so that server admins can do their job without worrying about interrupting any ongoing build or release task?
There seems to be a setting for configuring recurring maintenance (Organization Settings -> Agent Pools -> <Pool Name> -> Settings tab), but it appears to apply to the whole pool, and it's hard to tell which agents will be considered offline in which time slot.
Unfortunately, I couldn't find any documentation about it, and I'm not sure whether Azure DevOps also does something on the agent machines (such as running cleanup, updating agents, and so on).
Currently, the process requires a person with admin permissions in Azure DevOps to disable an agent so a server admin can perform regular maintenance, and to re-enable it when the server admin is done. It would be great if a server admin didn't have to involve an Azure DevOps admin every time for such routines.
Because you have your own Azure Pipelines agents, maintenance is easier and you have full control over whether to use automatic maintenance. With Microsoft-hosted agents you cannot do this, because those agents are maintained exclusively by Microsoft.
The best approach is to run more than one agent per machine and organize the agents into pools. With multiple pools, you can give each pool a different maintenance window schedule, leaving each agent time to download updates and reconfigure itself.
For example, I usually schedule the maintenance window for early Sunday morning once a month, and I stagger the pools' windows at 40-minute intervals so each agent has enough time to download the update, apply it, and restart itself.
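On the asker's last point: agents can also be enabled and disabled through the REST API (the Agents - Update endpoint), so a server admin with a suitably scoped personal access token could take an agent offline without involving an Azure DevOps admin. A hedged sketch, shown as a pipeline step although the same script can run from any shell; {organization}, {poolId}, and {agentId} are placeholders, $(AdminPat) is an assumed secret variable holding a PAT with Agent Pools (Read & manage) scope, and the exact request body is worth verifying against the Agents - Update documentation:

steps:
- powershell: |
    # Disable the agent before patching; PATCH the same body with "enabled": true
    # to bring it back afterwards.
    $uri  = "https://dev.azure.com/{organization}/_apis/distributedtask/pools/{poolId}/agents/{agentId}?api-version=6.0"
    $auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$(AdminPat)"))
    Invoke-RestMethod -Uri $uri -Method Patch `
      -ContentType "application/json" `
      -Headers @{ Authorization = "Basic $auth" } `
      -Body '{ "id": {agentId}, "enabled": false }'
  displayName: Take agent offline for maintenance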
For detailed explanations and use cases, please consult this documentation:
For Azure DevOps Server:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops-2019
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops-2019
For Azure DevOps Services (the cloud offering, formerly Visual Studio Team Services):
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops

How secure is the Azure Pipelines Agent

How secure are the Azure Pipelines hosted agents, and can they be used for more sensitive tasks like code signing?
Consider using container jobs to run your builds. This ensures that everything is done within a disposable container that is removed once the job is over. You can inject secrets via Azure Key Vault.
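To make that concrete, here is a minimal sketch of a container job that pulls a signing secret from Key Vault at runtime; the vault name, secret name, sign.sh script, and my-arm-connection service connection are all assumptions:

jobs:
- job: sign
  pool:
    vmImage: ubuntu-latest
  container: ubuntu:22.04            # all steps run inside this disposable container
  steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: my-arm-connection    # assumed ARM service connection
      KeyVaultName: my-signing-vault          # assumed vault
      SecretsFilter: signing-cert-password    # mapped to $(signing-cert-password)
  - script: |
      # Use the secret for the sensitive step; its value is masked in logs.
      ./sign.sh --password "$(signing-cert-password)"
    displayName: Sign artifacts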

What is the recommended release definition for starting and stopping an Azure VM?

I'd like to enhance the release definition so that I don't need a separate environment whose only purpose is to start an Azure VM.
Take a scenario with Test, Beta, and Production environments. The client wants the application installed in Beta and Production on their local network. Internally, we want a Test environment to run E2E tests against, to let non-technical folks exercise the app without needing VPN access to the customer's beta environment, and so on.
So here is each environment, followed by where its agent runs:
Test - Azure VM
Beta - Client machine
Production - Client machine
How we've solved this is to install the VSTS agent on a machine at the client, which allows us to target that agent queue in the Beta and Production environments defined for that release. Then we typically build an Azure VM and target that agent queue for the Test environment.
We don't want to run that Azure VM 24/7/365. However, if it's not running, it can't respond to requests from Release Management.
What I've done is create environments named Start Test VM and Stop Test VM that use the Azure Resource Group Deployment task to start and stop the VM. Those two additional environments can have their agent queue set to Hosted.
I'd like to figure out how to combine the first three environments into a logical Test instead of having to create three Release Management environments:
Start Test VM - Hosted
Test - Azure VM
Stop Test VM - Hosted
Beta - Client machine
Production - Client machine
The problem is that this can be rather ugly and confusing when handing it over to one of our PMs, or even to myself when I circle back around three months later and think, "What the hell is this environment? Oh, it's just there to start/stop the VM."
Options:
Stay with the status quo - keep it like it is; it can't be fixed
We could open up a port on the Azure VM and use PowerShell remoting, then run on the hosted agent or an on-premises agent to start the VM, deploy the application, and stop the VM - we really dislike this because the deployment would not be the same as the client's on-premises deployment. We'd like each environment's tasks to be the same, just with different variables.
You can use the "Azure PowerShell" and "Azure SQL Database Deployment" tasks to configure your Azure VM and SQL database, or call another script to run on the Azure VM.
There isn't any way to set the agent per task. You can submit a feature request on VSTS User Voice for this.
Another way to reduce the number of environments: if you deploy every build linked to the release, you can add a "Start Test VM" step to your build definition to start the VM when the build succeeds, and add a "Stop Test VM" step to the "Beta" environment.
What we've currently settled on is to continue with an 'environment' that isn't really what I would consider an environment, but more of a stage in the release pipeline that starts and/or shuts down a VM. We run that on a hosted agent so it can start the VM, and we make sure to check Skip artifacts download on the environment.
For a continuous integration build, we set up a chain so the VM gets started, the CI environment gets kicked off, and then the VM gets stopped. The remaining environments are then manually deployed up the chain as desired.
So here's an example:
Start CI VM
CI
Stop CI VM
Beta
Production
The original answer included an image of how this looks in Release Management as of 2016-06-27.
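For what it's worth, in today's YAML pipelines the same chain maps naturally onto stages with dependsOn, which avoids the fake-environment smell entirely. A rough sketch under that assumption; the stage names, resource group, VM name, pool, and my-arm-connection service connection are all placeholders:

stages:
- stage: StartCIVM
  pool:
    vmImage: windows-latest          # hosted agent, since the target VM is off
  jobs:
  - job: start
    steps:
    - task: AzurePowerShell@5
      inputs:
        azureSubscription: my-arm-connection
        ScriptType: InlineScript
        Inline: Start-AzVM -ResourceGroupName my-rg -Name ci-vm
        azurePowerShellVersion: LatestVersion

- stage: CI
  dependsOn: StartCIVM
  pool:
    name: CIVMPool                   # self-hosted agent on the CI VM
  jobs:
  - job: test
    steps:
    - script: echo "deploy the app and run E2E tests here"

- stage: StopCIVM
  dependsOn: CI
  condition: always()                # stop the VM even if CI fails
  pool:
    vmImage: windows-latest
  jobs:
  - job: stop
    steps:
    - task: AzurePowerShell@5
      inputs:
        azureSubscription: my-arm-connection
        ScriptType: InlineScript
        Inline: Stop-AzVM -ResourceGroupName my-rg -Name ci-vm -Force
        azurePowerShellVersion: LatestVersion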
I put single quotes around 'environment' because I think I agree with this User Voice request: it's really more of a stage in the release pipeline. Much like database development, the logical and the physical don't necessarily map one to one.