Install multiple Azure DevOps environment agents on server - azure-devops

We have a dev server hosting web services from multiple Azure DevOps projects. To use YAML deployment pipelines, we migrated from deployment pools to environments/resources. Unlike deployment pools, neither environments nor resources can be shared between projects. You can upvote here to change that.
We work around this as follows.
Create an environment for each project.
For each environment add the dev server as a resource.
Install one environment agent per project on the server.
Unfortunately, this creates a naming conflict if there is already an environment agent installed on the server.
The service already exists: vstsagent.MyDevOpsAccount..MyServer, it will be replaced
Error: Operation CreateService failed with return code 1072

TL;DR: In the PowerShell installation script, change --agent $env:COMPUTERNAME to --agent "$env:COMPUTERNAME-MyProject"
The reason seems to be that the Windows service name of the agent is determined as follows.
serviceName = StringUtil.Format(serviceNamePattern, accountName, settings.PoolName, settings.AgentName);
It uses the Azure DevOps organization name, the name of the deployment pool, and the agent name. Since the organization name is fixed and the deployment pool name is unavailable for environment agents, the agent name is the only place left to make the service name unique.
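A minimal sketch of the adjusted registration, assuming the standard environment-agent PowerShell script that Azure DevOps generates (organization, environment, project, and token values below are placeholders):

# Run from the extracted agent folder; the only change from the generated
# script is the --agent value, which gets a per-project suffix so the
# Windows service name (vstsagent.<org>.<pool>.<agent>) no longer collides.
$project = "MyProject"

.\config.cmd --environment --environmentname "Dev" `
    --agent "$env:COMPUTERNAME-$project" `
    --runasservice --work "_work" `
    --url "https://dev.azure.com/MyDevOpsAccount/" `
    --projectname $project `
    --auth PAT --token "<personal access token>"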

Related

Map service roles and replicas to servers with Azure DevOps Release

My project is a Windows Service application that can be installed in several roles (the differences are the service name, exe path, and some settings in app.config). Each role can be scaled horizontally by instance count, and all these {roles x replica counts} should be deployed over a set of servers in specific proportions for effective performance and utilization.
As an example:
ServerA
  ServiceAlfa.1
  ServiceAlfa.2
  ServiceBravo
  ServiceDelta
ServerB
  ServiceBravo
  ServiceCharlie
  ServiceDelta.1
  ServiceDelta.2
  ServiceDelta.3
How can I achieve this with Azure DevOps (Dev17.M153.5) tooling?
I know the brand new YAML pipelines introduce the concepts of Environments and VM resources. They're just not available in the latest stable version yet. But they look like a replacement for Deployment Groups, previously used for deployment to multiple machines, which I can use. I have already installed and registered the deployment agents, but I still cannot figure out how best to configure my complex mapping of instances to servers in a release pipeline.
I can create one single-job stage per role and link each with a corresponding variable group, like:
StageAlfa
  ServerA:2
StageBravo
  ServerA:1
  ServerB:1
StageCharlie
  ServerB:1
StageDelta
  ServerA:1
  ServerB:3
So I would have to check and compare the server name in my script (see the sketch below).
Or I can do the opposite: create a stage per machine and link it with a variable group describing the number of replicas of each role on that server. Then in every stage I could select the specific machine from the deployment group by tag.
The second approach looks simpler, but both feel rather awkward!
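For the first approach, a minimal sketch of what the per-server check might look like, assuming the stage's variable group exposes one replica-count variable per server (all names below are hypothetical):

# Variables from the linked variable group are exposed to scripts as environment
# variables by the release pipeline, e.g. ReplicasOnServerA = 2.
$server     = $env:COMPUTERNAME
$replicaVar = "ReplicasOn$server"
$replicas   = [int](Get-Item "Env:$replicaVar" -ErrorAction SilentlyContinue).Value

if ($replicas -lt 1) {
    Write-Host "No instances of this role are mapped to $server - nothing to do."
    return
}

for ($i = 1; $i -le $replicas; $i++) {
    # Hypothetical install step: suffix the service name when more than one
    # replica of the role runs on the same box.
    $serviceName = if ($replicas -gt 1) { "ServiceAlfa.$i" } else { "ServiceAlfa" }
    Write-Host "Installing $serviceName on $server"
    # Install-MyService -Name $serviceName -ExePath "C:\apps\$serviceName\svc.exe"
}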
P.S. These are Windows services on machines, not containers in Kubernetes, due to specific Windows software dependencies.
Your approaches are correct. You may consider migrating to Azure DevOps Services or upgrading to Azure DevOps Server 2020, which supports Environments and VM resources:
https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020?view=azure-devops#continuous-deployment-in-yaml

How to know which machines were skipped during DevOps Release using Azure Pipeline Agents for a Deployment Group

I'm using Azure Pipeline Agents on machines, have those machines in a Deployment Group, and have a DevOps Release which does some things on each machine. If the Azure Pipeline Agent isn't running on a machine at release time, the release will skip over this machine. How can I know which machines were skipped?
How can I know which machines were skipped?
The easiest way is to manually check the detailed deployment log; there you can see the skipped agent names.
On the other hand, you could also use the REST API Releases - Get Release. In the API response, you can check the job status and the agent name.
Here is a sample:
GET https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/{releaseId}?api-version=6.0
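A rough sketch of calling this from PowerShell and listing the skipped jobs; the property path (environments -> deploySteps -> releaseDeployPhases -> deploymentJobs) reflects the usual shape of the Get Release response but may need adjusting, and the organization, project, and PAT values are placeholders:

$org       = "myorg"
$project   = "myproject"
$releaseId = 123
$pat       = "<personal access token>"

# PAT authentication: Basic auth with an empty user name.
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$url = "https://vsrm.dev.azure.com/$org/$project/_apis/release/releases/$($releaseId)?api-version=6.0"
$release = Invoke-RestMethod -Uri $url -Headers $headers -Method Get

foreach ($environment in $release.environments) {
    foreach ($step in $environment.deploySteps) {
        foreach ($phase in $step.releaseDeployPhases) {
            foreach ($deployJob in $phase.deploymentJobs) {
                if ($deployJob.job.status -eq "skipped") {
                    Write-Host "Skipped on agent '$($deployJob.job.agentName)' in environment '$($environment.name)'"
                }
            }
        }
    }
}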

VSTS/DevOps BizTalk automatic deployment to multiple servers in BizTalk Group (BizTalk Server Application Deployment)

We are thinking of moving from BTDF to the new VSTS automatic deployment mechanism.
In my test setup, deploying to a single-node BTS server worked just fine, but I wonder how it is done with a BizTalk group of multiple servers.
In BTDF, the .msi had to be run on all nodes (once with 'this is the first server in the group' checked) to create the application, and on the other nodes just to install and GAC the resources...
Is this being done automatically by the 'Deploy BizTalk Server Application' deployment task or do I have to run it once with 'Create new BizTalk Server application' and on the other servers with 'Install BizTalk Server Application' set?
If yes, do I simply run it on the deployment agent of the node with the management db or would I deploy to a deployment group/environment resource group containing all nodes?
You must run the task with "Deploy..." to import the application and install to the GAC on a primary server (any one of your servers). This deployment creates a share with the full MSI. Then run the deploy task with "Install..." to install only the GAC resources on the secondary servers. I have set up a CI/CD pipeline as follows (farm of 3 servers):
Create a deployment group with 3 servers (one agent per server).
Tag one server as primary.
Tag the other servers as secondary.
In the pipeline, add 2 jobs: one that runs only on the primary server, filtering on the primary tag, and a second that filters only on the secondary ones.
In the first job, the deploy task imports into the BizTalk database and runs the MSI; the second job only runs the MSI on the secondaries.
I hope you have already visited Microsoft's documentation page Configure automatic deployment with Visual Studio Team Services in BizTalk Server. Please see Provision deployment groups to create a deployment group of multiple servers (this is part of 'Step 2'). Once the deployment groups are created, please make use of them in the 'Release' phase of the CI/CD pipeline as shown in this GIF image: Release Pipeline BTS Deploy Groups
Get started
Step 1: Add Application project & update .json template
Step 2: Create the VSTS token & install the build agent
Step 3: Create the build and release definitions

Are a Build Agent and a Deployment Group the Same?

I am using Visual Studio Online (Azure DevOps) with on-premises deployment.
When we create a deployment group, it provides us a PowerShell script which, when executed, creates a Windows service (VSTS Agent).
The other option is to download the agent for an agent pool, configure it, and run it.
Is the agent pool the same as the pool that is started using that PowerShell script?
A deployment group and a normal build/release agent are not the same thing. It's the same agent software, but agents registered to deployment groups are for a specific purpose and are not available in the normal agent pools.
Please refer to the documentation.
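For illustration, a rough sketch of the two registrations, showing that it is the same config.cmd from the same agent download and only the flags differ (URLs, names, and the token below are placeholders):

# 1) Deployment group agent - roughly what the generated PowerShell script does:
.\config.cmd --deploymentgroup --deploymentgroupname "Dev-Servers" `
    --agent $env:COMPUTERNAME --runasservice `
    --url "https://dev.azure.com/myorg/" --projectname "MyProject" `
    --auth PAT --token "<personal access token>"

# 2) Ordinary build/release agent registered into an agent pool:
.\config.cmd --pool "Default" --agent $env:COMPUTERNAME --runasservice `
    --url "https://dev.azure.com/myorg/" `
    --auth PAT --token "<personal access token>"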

What is the recommended release definition for starting and stopping an Azure VM?

I'd like to enhance the release definition so that I don't need to have a separate environment that only starts an Azure VM.
Take a scenario where we have Test, Beta, and Production environments. The client wants the application installed in Beta and Production on their local network. We internally want a Test environment to run E2E tests against, to allow non-technical folks to exercise the app without needing VPN access to the customer's beta environment, etc.
So here we have Environment followed by where the Agent is running:
Test - Azure VM
Beta - Client machine
Production - Client machine
How we've solved this is to install the VSTS Agent on a machine at the client, which allows us to target that agent queue in the Beta and Production environments defined for that release. Then we typically build an Azure VM and target that agent queue for the Test environment.
We don't want to run that Azure VM 24/7/365. However if it's not running, then it can't respond to requests from Release Management.
What I've done is create environments named Start Test VM and Stop Test VM that use the Azure Resource Group Deployment task to start and stop the VM. Those 2 additional environments can have their agent queue set to Hosted.
I'd like to figure out how to combine the first 3 environments into a logical Test instead of having to create 3 release management environments.
Start Test VM - Hosted
Test - Azure VM
Stop Test VM - Hosted
Beta - Client machine
Production - Client machine
The problem is that this can be rather ugly and confusing when handing it over to one of our PMs, or even to myself when I circle back around 3 months later and think, "What the hell is this environment? Oh, it's just there to start/stop the VM."
Options:
Stay with status quo - keep it like it is, it can't be fixed
We could open up a port on the Azure VM and use PowerShell remoting, then run on the Hosted agent or an on-premises agent to start the VM, deploy the application, and stop the VM. We really dislike this because the deployment would not be the same as the client's on-premises deployment. We'd like each environment's tasks to be the same, just with different variables.
You can use "Azure PowerShell" and "Azure SQL Database Deployment" tasks to configure your Azure VM and SQL or call other script to run on the Azure VM.
There isn't any way to set the agent per task. You can submit a feature request on VSTS User Voice for this.
Another way to reduce the number of environments: if you deploy every build linked to the release, you can add a "Start Test VM" task to your build definition to start the VM when the build succeeds, and add a "Stop Test VM" task to the "Beta" environment.
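For the start/stop pieces, a rough sketch of what the script behind such an "Azure PowerShell" task might look like, using the current Az module (the 2016-era setup would have used the AzureRM equivalents); the resource group and VM name are placeholders:

$resourceGroup = "rg-test"
$vmName        = "vm-test-agent"

# "Start Test VM" stage: boot the VM so its agent can pick up the Test deployment.
Start-AzVM -ResourceGroupName $resourceGroup -Name $vmName

# "Stop Test VM" stage: Stop-AzVM deallocates by default so the VM stops accruing
# compute charges; -Force skips the confirmation prompt on a build agent.
Stop-AzVM -ResourceGroupName $resourceGroup -Name $vmName -Force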
What we've currently settled on is to continue with having an environment that isn't really what I would consider an environment, but more of a stage in the release pipeline that starts and/or shuts down a VM. We run that on a hosted agent so it can start the VM and make sure to check Skip artifacts download on the environment.
For a continuous integration build, we set a chain so the VM gets started, CI environment gets kicked off and then VM gets stopped. The remaining environments are then manually deployed up the chain as desired.
So here's an example:
Start CI VM
CI
Stop CI VM
Beta
Production
And here's an image of how it looks in Release Management as of 2016-06-27.
I put single quotes around 'environment' because I think I agree with this User Voice request in that it's really more of a stage in the release pipeline. Much like database development, the logical and the physical don't necessarily map 1 to 1.