My project is a Windows Service application that can be installed in several roles (the differences are the service name, the exe path and some settings in app.config). Each role can be scaled horizontally by instance count, and all these {role x replica count} combinations should be deployed over a set of servers in specific proportions for effective performance and utilization.
As an example:
ServerA
  ServiceAlfa.1
  ServiceAlfa.2
  ServiceBravo
  ServiceDelta
ServerB
  ServiceBravo
  ServiceCharlie
  ServiceDelta.1
  ServiceDelta.2
  ServiceDelta.3
How can I achieve this with Azure DevOps (Dev17.M153.5) tooling?
I know the brand new YAML pipelines introduce the concept of Environments and VM resources. It's just not available in the latest stable version yet. But it looks like a replacement for Deployment Groups, which were previously used for deployment to multiple machines and which I can use. I have already installed and registered the deployment agents, but I still cannot figure out how best to configure my complex mapping of instances to servers in a release pipeline.
I can create a one-job stage per role and link each one with a corresponding variable group, like:
StageAlfa
  ServerA: 2
StageBravo
  ServerA: 1
  ServerB: 1
StageCharlie
  ServerB: 1
StageDelta
  ServerA: 1
  ServerB: 3
So I would have to check and compare the server name in my script.
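A rough sketch of what that per-role script could look like, assuming the linked variable group exposes one variable per server holding the replica count for this role (all names and paths below are hypothetical placeholders):

    # Hypothetical per-role deployment script (stage = role). The linked variable
    # group is assumed to expose one variable per server with the replica count
    # for this role (e.g. ServerA = 2), surfaced to the script as an env variable.
    $role     = "ServiceAlfa"                      # role deployed by this stage
    $server   = $env:COMPUTERNAME                  # machine the deployment agent runs on
    $var      = Get-Item "Env:$server" -ErrorAction SilentlyContinue
    $replicas = if ($var) { [int]$var.Value } else { 0 }

    if ($replicas -eq 0) {
        Write-Host "No $role instances planned for $server"
        return
    }

    for ($i = 1; $i -le $replicas; $i++) {
        $name = if ($replicas -eq 1) { $role } else { "$role.$i" }   # ServiceAlfa.1, ServiceAlfa.2, ...
        New-Service -Name $name -BinaryPathName "C:\Services\$name\$role.exe" -StartupType Automatic
    }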
Or I can do the opposite: create a stage per machine and link it with a variable group describing the number of replicas of each role on that server. Then in every stage I could select the specific machine from the deployment group by tag.
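For that second approach, the per-server script might look roughly like this (the count variable names are hypothetical, coming from the variable group linked to the machine's stage):

    # Hypothetical per-server deployment script (stage = machine). The variable
    # group linked to this stage defines one replica count per role.
    $roles = @{
        "ServiceAlfa"    = [int]$env:ALFACOUNT
        "ServiceBravo"   = [int]$env:BRAVOCOUNT
        "ServiceCharlie" = [int]$env:CHARLIECOUNT
        "ServiceDelta"   = [int]$env:DELTACOUNT
    }

    foreach ($role in $roles.Keys) {
        for ($i = 1; $i -le $roles[$role]; $i++) {
            $name = if ($roles[$role] -eq 1) { $role } else { "$role.$i" }
            New-Service -Name $name -BinaryPathName "C:\Services\$name\$role.exe" -StartupType Automatic
        }
    }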
The second approach looks simpler, but both of them feel so awkward!
P.S. These are Windows Services on machines, not containers in Kubernetes, due to specific Windows software dependencies.
Your approaches are correct. You may consider migrating to Azure DevOps Services or upgrading to Azure DevOps Server 2020, which supports Environments and VM resources:
https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020?view=azure-devops#continuous-deployment-in-yaml
Cheers!
Maybe some of you have already done something similar.
We created a dedicated, self-hosted Azure DevOps agent pool in one of our subscriptions with Terraform.
So, Terraform being Terraform and Azure DevOps doing its magic with the agent pools, any major update to the scale set currently results in a recreation of the scale set, with corresponding downtime. We know about the ignore_changes lifecycle settings that would probably prevent that, but we haven't implemented them yet.
So my question is: does anyone have experience with how Azure DevOps reacts when you change the target scale set of a running agent pool?
Meaning just changing the target scale set via the Azure DevOps portal.
A little downtime is fine with us, but we would really love to be able to deploy the new infrastructure in parallel with the old agent set and then switch via the portal, like a standard blue/green deployment scheme.
Also having a fallback to the old agent pool would be a major bonus.
As long as an agent pool doesn't support more than one scale set, that seemed to be the most viable solution.
Anyone here ever tried anything like this?
Thanks!
To answer my own question:
We just pulled the plug and switched over to a new Scale Set.
The downtime is immediate, because Azure DevOps scales the "old" scale set to 0 right after the switch.
After approximately 10-15 minutes, Azure DevOps started to scale up the new instances and added them to the agent pool.
So in a nutshell: blue/green deployment of the scale set basically worked. You can schedule new jobs while the agents are down, but jobs running at the time of the switch are terminated, as the instances are deleted right away.
We currently have 4 Azure DevOps team projects that require two Deployment Groups to be created for their SIT and UAT release pipelines. All 4 team projects will share the two Deployment Groups, the idea being to create each deployment group in one team project and then share or extend it to the other 3 (which I believe is common practice).
My main concern though is that due to some budget constraints, the decision has been taken to create both SIT and UAT Deployment Groups on a single target server. Much as I strongly believe this is probably not best practice, are there any technical reasons why this cannot or shouldn't be implemented?
In simple terms, a deployment group is this:
A deployment group is a logical set of deployment target machines that
have agents installed on each one. Deployment groups represent the
physical environments; for example, "Dev", "Test", "UAT", and
"Production". In effect, a deployment group is just another grouping
of agents, much like an agent pool.
We support registering the same machine to multiple deployment groups. However, you would need to edit the agent name in the PowerShell "registration script" provided in the UI, or log in to the machine and execute the script from a different folder than the default one specified in the script.
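A rough sketch of such a re-registration from a second folder (organisation, project, group name, tags and PAT below are placeholders; the real registration script generated in the UI also downloads and extracts the agent package first):

    # Register the same server into a second deployment group under a different
    # agent name, from a different folder than the default one in the generated script.
    # Assumes the deployment agent package has already been extracted into $folder.
    $folder = "C:\azagent\UAT-A1"
    Set-Location $folder
    .\config.cmd --deploymentgroup `
        --deploymentgroupname "UAT" `
        --agent "$env:COMPUTERNAME-uat" `
        --adddeploymentgrouptags --deploymentgrouptags "web,uat" `
        --url "https://dev.azure.com/yourorg/" `
        --projectname "YourProject" `
        --auth PAT --token $pat `
        --runasservice --work "_work"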
Normally, we set up deployment groups with multiple agents and run the deployment only against the target agents, according to requirements.
What you can do is assign tags to deployment agents and use tags to assign releases to specific agents.
In summary, it is possible to register the same machine/server to multiple deployment groups if you insist on it. However, it is not recommended, due to server performance, environment isolation, disaster tolerance and other factors.
Is it possible to run multiple Azure DevOps self-hosted build/deploy agents and multiple deployment agents on one server? Also, can these agents service more than one organisation, or even multiple Azure AD tenants?
I do realise the consequences of the server straining under IO bottlenecks and the like; these agents will probably never have to handle more than 3 projects being built and/or deployed at a time, but the sources can come from different projects in different organisations, or possibly different tenants.
I have deployed my deployment agents to the servers and they work fine with a Microsoft-hosted build agent (my question is about ONE of these servers, but it would apply to all of them eventually). However, I am hesitant to now start deploying the build agents to the same servers.
This approach is very doable and is actually really cost-effective if you do not have continuous deployments, or if your virtual machine has the IO capacity to handle the planned traffic.
Understand the basics of an agent: when you host a Windows agent, it creates a Windows Service that internally runs a separate process and performs the actions for the agent.
Since these are independent processes, they are not impacted at all by the operations of other agents. As long as they are not trying to access the same files/resources, this is actually a great approach and well worth trying.
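For example, a second self-hosted build agent can sit in its own folder on the same server, roughly like this (organisation, pool name, zip path and PAT are placeholders); an agent serving a different organisation or tenant would simply be configured with that organisation's URL and a PAT issued there:

    # Hypothetical setup of an additional build agent on a server that already
    # hosts other agents. Each agent gets its own folder and its own Windows Service.
    $agentDir = "C:\agents\build-02"
    New-Item -ItemType Directory -Path $agentDir -Force | Out-Null
    Expand-Archive -Path "C:\temp\vsts-agent-win-x64.zip" -DestinationPath $agentDir
    Set-Location $agentDir
    .\config.cmd --unattended `
        --url "https://dev.azure.com/yourorg" `
        --auth PAT --token $pat `
        --pool "Default" `
        --agent "$env:COMPUTERNAME-build-02" `
        --runAsService --work "_work"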
We would like to try building a release pipeline for our product in VSTS - however, our product requires a separate instance of the application per customer (there is some legacy in the picture here :)). What we THINK we want is a process like this:
For each customer:
- Update the DB schema
- Configure a container, with customer-specific configuration etc.
- Publish the container into Azure Container Registry
- Deploy the container in Azure Container Service (OR on-prem if the customer runs on-prem)
The configuration can be multiple things: Extensions of the API in the application (new DLLs basically), connection strings, ...
I figure we can do this fairly easily using a custom PowerShell script, but I would prefer not to write anything custom (at least for the "looping" part) if I don't have to. We could also create separate environments in VSTS for each customer, but that seems quite unmaintainable with well over 100 customers.
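For reference, the custom-script version of that loop might look roughly like this (customers.json, Update-DbSchema.ps1 and Deploy-Container.ps1 are hypothetical placeholders for your own pieces):

    # Hypothetical per-customer loop: one DB, one customer-specific image, one deployment each.
    $customers = Get-Content "customers.json" -Raw | ConvertFrom-Json
    foreach ($customer in $customers) {
        # 1. Update the customer's DB schema (placeholder for your migration tooling)
        & .\Update-DbSchema.ps1 -ConnectionString $customer.ConnectionString

        # 2. Build a customer-specific image and push it to Azure Container Registry
        docker build -t "myregistry.azurecr.io/myapp:$($customer.Name)" --build-arg CONFIG="$($customer.ConfigPath)" .
        docker push "myregistry.azurecr.io/myapp:$($customer.Name)"

        # 3. Deploy the container to ACS, or on-prem for customers hosted there (placeholder)
        & .\Deploy-Container.ps1 -Customer $customer
    }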
Some additional details:
- There's a separate DB per customer
- There are two separate web applications per customer
So what's the best practice here? Any advice? Thanks! :-)
You could think of doing it in two ways.
1 - By creating one environment for each customer. So you could have the exact same tasks for each environment, or have the flexibility to change steps in a particular environment.
This approach would also give you the ability to use a pipeline flow, because your build will be released only after it passes your internal QA and other processes.
To make this easy, you could also create task groups and reuse them in each environment.
2 - The other way is to create separate releases for each customer or group of customers. This also gives you the same flexibility and lets you reuse your builds, but you have to add some extra steps to make sure you are using the right build, since you can choose any build when you create a release manually.
Updated
A third option could be to create one environment for all customers and then have one deployment agent installed per customer, with all of them in the same deployment group. Then have one file with all your variables per customer, named after the agent name, and a PowerShell script that uses the agent name variable to find which file to use. This PowerShell script would then apply all your individual configurations.
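A rough sketch of that lookup, assuming one "<agent name>.json" file per customer (the file layout and helper scripts are hypothetical):

    # Pick the customer-specific variable file based on the deployment agent's name.
    $configFile = Join-Path $env:SYSTEM_DEFAULTWORKINGDIRECTORY "configs\$($env:AGENT_NAME).json"
    $config = Get-Content $configFile -Raw | ConvertFrom-Json

    # Apply the customer-specific settings (placeholders for your real steps)
    Write-Host "Deploying customer '$($config.CustomerName)' using $configFile"
    & .\Update-DbSchema.ps1    -ConnectionString $config.ConnectionString
    & .\Deploy-Application.ps1 -Settings $config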
In that case, I suspect you would end up doing almost all of your deployment in PowerShell, which could be more time-consuming to maintain. You also have to keep in mind that in this particular scenario you would update all your customers at the same time, because all the agents would be in the same deployment group.
Let's say I'm deploying an Azure VM by a script which requires multiple resources that are interdependent on one another.
Let's say the NIC fails:
Does the deployment still go through to the deployment of the VM, so that I have a VM with no NIC?
Or does it fail, rolling back the entire deployment?
No, ARM templates are not executed within a transaction.
It's possible to end up with some resources deployed even though the whole ARM template did not deploy. In your particular case it's not possible to have a VM without a NIC, because the VM depends on the NIC and won't be created if the NIC deployment fails. (But you should test the deployment of the ARM template and make it work in the end.)
It does not roll back.
I think they have now added a "--rollback-on-error" flag, which rolls back to the last successful deployment if the deployment fails. You can also specify the name of the deployment to roll back to. See https://learn.microsoft.com/en-us/cli/azure/group/deployment?view=azure-cli-latest#commands.
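For example (resource group, template and deployment names are placeholders):

    # Roll back to the last successful deployment if this one fails
    az group deployment create --resource-group MyRg --template-file azuredeploy.json --rollback-on-error

    # Or roll back to a specific earlier deployment by name on error
    az group deployment create --resource-group MyRg --template-file azuredeploy.json --rollback-on-error LastKnownGood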