Running multiple build agents and deployment agents that service different Organisations on one Server - azure-devops

Is it possible to run multiple Azure Self-hosted build/deploy agents and multiple deployment agents on one server? Also, can these agents service more than one organisation or even multiple Azure AD Tenants?
I do realise the consequences of the server straining under IO bottlenecks and the like; these agents will probably never have to handle more than 3 projects being built and/or deployed at a time, but the sources can come from different projects in different organisations, or possibly different tenants.
I have deployed my deployment agents to the servers and they work fine with a Microsoft-hosted build agent (my question is about ONE of these servers, but it would eventually apply to all of them). However, I am now hesitant to start deploying the build agents to the same servers.

This approach is very doable and is actually quite cost-effective if you do not have continuous deployments, or if your virtual machine has the IO capacity to handle the planned traffic.
It helps to understand the basics of an agent. When you configure a Windows agent, it registers a Windows service, which in turn runs a separate process that performs the agent's work.
Since these are independent processes, they are not impacted by the operations of other agents. As long as they are not contending for the same files or resources, this is a perfectly sound approach and well worth trying.
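To answer the multi-organisation part: each agent is configured independently, so nothing stops you from registering one agent folder against one organisation and a second folder against another (even in a different Azure AD tenant). A minimal sketch, assuming placeholder folder names, organisation URLs and PAT environment variables:

```powershell
# Hedged sketch: one server, two agents in separate folders, each registered to a different
# organisation (or tenant) with its own PAT. Folder names, organisation URLs and the PAT
# environment variables are placeholders, not from the question.

# Agent 1 -> Organisation A
cd C:\Agents\agent-orgA
.\config.cmd --unattended `
    --url https://dev.azure.com/OrganisationA `
    --auth pat --token $env:ORGA_PAT `
    --pool Default `
    --agent "$env:COMPUTERNAME-orgA" `
    --runAsService

# Agent 2 -> Organisation B, same server, its own folder and its own Windows service
cd C:\Agents\agent-orgB
.\config.cmd --unattended `
    --url https://dev.azure.com/OrganisationB `
    --auth pat --token $env:ORGB_PAT `
    --pool Default `
    --agent "$env:COMPUTERNAME-orgB" `
    --runAsService
```

Each config.cmd run creates its own Windows service, so the two agents run as the independent processes described above.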

Related

Map service roles and replicas to servers with Azure DevOps Release

My project is a Windows Service application that can be installed in several roles (the differences are in the service name, exe path and some settings in app.config). Each role can be scaled horizontally by instance count, and all of these {roles x replica counts} need to be deployed over a set of servers in specific proportions for effective performance and utilization.
As an example:
ServerA
  ServiceAlfa.1
  ServiceAlfa.2
  ServiceBravo
  ServiceDelta
ServerB
  ServiceBravo
  ServiceCharlie
  ServiceDelta.1
  ServiceDelta.2
  ServiceDelta.3
How can I achieve this with Azure DevOps (Dev17.M153.5) instruments?
I know the brand new YAML pipelines introduce the concept of Environments and VM resources; it's just not available in the latest stable version yet. It is essentially a replacement for Deployment Groups, previously used for deployment to multiple machines, which I can use. I have already installed and registered the deployment agents, but I still cannot figure out how best to configure my complex mapping of instances to servers in a release pipeline.
I can create a one-job stage per role and link each to a corresponding variable group, like:
StageAlfa
  ServerA:2
StageBravo
  ServerA:1
  ServerB:1
StageCharlie
  ServerB:1
StageDelta
  ServerA:1
  ServerB:3
So my script would have to check and compare the server name to decide how many replicas to install.
Or I can do the opposite: create a stage per machine and link it to a variable group describing the replica count of each role on that server. In every stage I would then select the specific machine from the deployment group by tag.
The second approach looks simpler, but both feel awkward!
P.S. These are Windows Services on machines, not containers in Kubernetes, because of specific Windows software dependencies.
Your approaches are correct. You may also consider migrating to Azure DevOps Services or upgrading to Azure DevOps Server 2020, which supports Environments and virtual machine resources:
https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020?view=azure-devops#continuous-deployment-in-yaml
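Until then, a rough sketch of the second approach (one stage per machine, with a variable group supplying the replica count per role for that server) might look like the script below, run by a deployment group job on each machine. The variable name, service name, exe path and the "-instance" argument are illustrative placeholders, not taken from the question.

```powershell
# Hedged sketch: install/start N replicas of one role on the current server.
# $env:ALFA_COUNT would come from the variable group linked to this machine's stage.
param(
    [int]$AlfaCount = $env:ALFA_COUNT,                              # e.g. 2 on ServerA, 0 on ServerB
    [string]$BinPath = 'C:\Services\ServiceAlfa\ServiceAlfa.exe'    # placeholder install path
)

for ($i = 1; $i -le $AlfaCount; $i++) {
    $name = "ServiceAlfa.$i"
    if (-not (Get-Service -Name $name -ErrorAction SilentlyContinue)) {
        # Register each replica as its own Windows service; the instance number is passed
        # on the command line so the role can pick up instance-specific settings.
        New-Service -Name $name -BinaryPathName "`"$BinPath`" -instance $i" -StartupType Automatic
    }
    Start-Service -Name $name
}
```

Repeat a similar loop per role, and the variable group attached to each machine's stage becomes the single place that encodes your {role x replica} mapping.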

Running maintenance on self-hosted Azure DevOps agents

I have several self-hosted Azure DevOps agents (each installed on a dedicated on-prem server) and I need to perform recurring maintenance on them (patching, etc.). Is there a good way to define those maintenance windows within Azure DevOps so that server admins can do their job without worrying about interrupting any ongoing build/release task?
There seems to be a setting related to configuring recurring maintenance (Organization Settings -> Agent Pools -> <Pool Name> -> Settings tab), but it appears to apply to the whole pool, and it's hard to tell which of the agents will be considered offline in which time slot.
Unfortunately, I couldn't find any documentation about it, and I'm not sure whether Azure DevOps also does something on the agent machines itself (e.g. running cleanup, updating agents and so on).
Currently, the process requires someone with admin permissions in Azure DevOps to disable an agent so that a server admin can perform regular maintenance, and then to re-enable it when the server admin is done. It would be great if server admins did not have to involve an Azure DevOps admin every time for such routines.
Because you run your own Azure Pipelines agents, maintenance is easier and you have full control over whether to use automatic maintenance or not. With Microsoft-hosted agents you could not do this, because those agents are maintained exclusively by Microsoft.
The best way to do this is to have more than one agent per machine and organise the agents into pools. With multiple pools, you can configure a different maintenance window schedule for each pool, staggered so that each one has time to download updates and reconfigure itself.
For example, I usually schedule the maintenance window once a month, early on a Sunday morning, and stagger my pools at 40-minute intervals so that each agent has enough time to download the update, apply it and restart itself.
Please consult the documentation for more detailed explanations and use cases:
For Azure DevOps Server:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops-2019
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops-2019
For Azure DevOps Services (cloud-hosted, formerly Visual Studio Team Services):
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/pools-queues?view=azure-devops
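For the last point in the question (letting a server admin take a single agent offline without involving an Azure DevOps admin), one option is to script the enable/disable toggle against the REST API and hand the server admin a PAT scoped to agent pool management. A hedged sketch, assuming the Agents - Update endpoint and placeholder organisation, pool id, agent id and API version:

```powershell
# Hedged sketch: disable an agent before patching, re-enable it afterwards.
# Organisation URL, pool/agent ids and api-version are placeholders; verify them against
# the Distributed Task "Agents - Update" REST reference for your server version.
$org     = 'https://dev.azure.com/MyOrg'
$poolId  = 1
$agentId = 42
$pat     = $env:AZDO_PAT   # PAT with "Agent Pools (read, manage)" scope

$headers = @{
    Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
}
$uri  = "$org/_apis/distributedtask/pools/$poolId/agents/${agentId}?api-version=6.0"
$body = @{ id = $agentId; enabled = $false } | ConvertTo-Json   # set enabled = $true to re-enable

Invoke-RestMethod -Uri $uri -Method Patch -Headers $headers -Body $body -ContentType 'application/json'
```

A disabled agent finishes its current job but picks up no new ones, so the server admin can wait for it to go idle and then patch and reboot freely.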

Can one private agent from VSTS be installed on multiple VMs?

I have installed an agent on a VM and configured a CI build pipeline. The pipeline is triggered and works perfectly fine.
Now I want to use the same build pipeline and the same agent, but on a different VM. Is this possible?
How will builds execute, and on which VM will the sources be copied?
Thank you.
Like the others, I'm not entirely sure what you're trying to do, but I also think that using the same agent across multiple machines is not possible.
If you need to alternate or choose easily between VMs, you could set up an individual agent queue for each of the VMs used in this scenario, with one agent in each queue. That way you can choose the agent queue at queue time via the agent queue dropdown. But that only works if you're triggering manually, not in a typical CI scenario; in that case you would have to edit the definition each time you want to swap VMs.
No. Private agents are supposed to have a unique name and are assigned to an agent pool/queue. They poll the VSTS/Azure DevOps server to see whether they have a job to do and then execute it. If you clone a machine with the same private build agent installed, then in theory whichever agent picks the job up first would execute it, but that is only theory; I really don't know how the agent queues would handle it.
It depends on what you want to do.
If you want to spread the workload, for example two build servers with builds going to whichever one isn't busy, create one agent pool/queue. Install a private agent on the first server and register it to that pool; then, on the second server, un-register its agent and re-register it into the SAME pool.
If you want to do work on two servers at exactly the same time, such as deploying to both servers simultaneously, create a 'Deployment Group' and add both servers to it. Un-register both agents from the agent pool/queue, then copy the PowerShell script snippet from your 'Deployment Group' and run it on each machine. You can then use the group in your release pipeline and run deployments in parallel, which shortens deployment time.
You could set up a variable in the pipeline so you can specify the name of the VM at build-time.
Also, once you have one or more agents, you would add them to an agent pool. When a build runs, one agent from the pool is chosen and used.
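As a concrete sketch of the "one pool, one agent per VM" option described above: run the extracted agent package on each VM and register it into the same pool. The pool name, organisation URL and PAT variable below are placeholders.

```powershell
# Hedged sketch: run this on each VM, from the folder where the agent package was extracted.
# Registering both agents into the same pool lets the pipeline use whichever VM is free.
cd C:\agent
.\config.cmd --unattended `
    --url https://dev.azure.com/MyOrg `
    --auth pat --token $env:AZDO_PAT `
    --pool BuildPool `
    --agent $env:COMPUTERNAME `
    --runAsService
```

The --agent name must be unique per machine (here the computer name), while --pool is the same on both VMs; the pipeline then targets the pool rather than a specific VM.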

How to Autopatch Azure IaaS VMs

We have a public website hosted on two Azure IaaS VMs which are behind a Network Load Balancer. What are the available solutions to auto patch and reboot without impacting site availability?
I am looking for a solution like this:
1. Remove the IaaS VM from the NLB so traffic stops reaching it (e.g. apply a network security group to block the traffic).
2. Run the monthly patches/updates on the IaaS VM.
3. Restart the IaaS VM.
4. Re-enable the IaaS VM in the NLB to allow traffic again.
5. Move on to the next server.
Are there any solutions available for this in Azure?
Or do we need to prepare our own PowerShell scripts to do this? If it's a PowerShell script, how do we make it run once a month?
Are there any solutions available for this in Azure?
I suggest using the Update Management solution in the Operations Management Suite; with it you can configure an automated patching schedule for your Azure IaaS VMs.
There you can define a one-time, weekly or monthly schedule. Being able to put different VMs on different schedules ensures that the services running on your Azure IaaS VMs stay available during an automated patching run.
For more information, please refer to this link.
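If you would rather script the drain/patch/reboot/re-add loop yourself (for example as an Azure Automation runbook on a monthly schedule), a rough PowerShell sketch using the Az modules could look like the following. The resource group, VM names and the in-guest update step are placeholders, and the cmdlets should be checked against the Az documentation for your module version.

```powershell
# Hedged sketch: rolling patch of two load-balanced VMs, one at a time.
Import-Module Az.Compute, Az.Network

$rg  = 'MyResourceGroup'            # placeholder resource group
$vms = 'web-vm-1', 'web-vm-2'       # placeholder VM names behind the load balancer

foreach ($vmName in $vms) {
    $vm  = Get-AzVM -ResourceGroupName $rg -Name $vmName
    $nic = Get-AzNetworkInterface -ResourceId $vm.NetworkProfile.NetworkInterfaces[0].Id

    # 1. Drain: detach the NIC's IP configuration from the load balancer backend pool
    $backendPools = $nic.IpConfigurations[0].LoadBalancerBackendAddressPools
    $nic.IpConfigurations[0].LoadBalancerBackendAddressPools = $null
    $nic | Set-AzNetworkInterface | Out-Null

    # 2. Patch inside the guest OS (placeholder - e.g. a script that invokes Windows Update)
    # Invoke-AzVMRunCommand -ResourceGroupName $rg -VMName $vmName `
    #     -CommandId 'RunPowerShellScript' -ScriptPath '.\Install-Updates.ps1'

    # 3. Reboot the VM
    Restart-AzVM -ResourceGroupName $rg -Name $vmName

    # 4. Re-attach the NIC to the backend pool so traffic flows again
    $nic.IpConfigurations[0].LoadBalancerBackendAddressPools = $backendPools
    $nic | Set-AzNetworkInterface | Out-Null
}
```

To run it monthly, the simplest options are an Azure Automation runbook with a monthly schedule or a scheduled task on a management machine; Update Management, as suggested above, gives you the schedule and reporting without maintaining a script like this.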

Azure Service Fabric-based Services: Prerequisite is always a prepared cluster?

If I've understood the docs properly, Azure Service Fabric-based apps/microservices cannot be installed together with their Service Fabric operational environment in one "packaged installer" step. So if I want to deploy a set of microservices on premises at a company that runs a typical Windows Server 2012 or VMware IT center, I'm out of luck? I'd have to require the company to first commit to (and carry out) an installation of an Azure Service Fabric cluster on several machines.
If that is the case, then Azure Service Fabric is only an option for pure cloud operations, where the Service Fabric cluster can be created on demand by the provider, or for companies that have already committed to Azure Service Fabric. That would mean a provider of classic "installer-based" software cannot evolve towards the advantages of Azure Service Fabric, since the datacenter policies of potential customers are unknown.
What have I missed?
Yes, you always need a cluster to run Service Fabric applications and microservices. However, this is no longer limited to a pure cloud environment: as of September last year, the on-premises version, Azure Service Fabric for Windows Server, is GA (https://azure.microsoft.com/en-us/blog/azure-service-fabric-for-windows-server-now-ga/), and it lets you run your own cluster on your own machines (physical or virtual, it doesn't matter), in another data center, or even at another cloud provider.
Of course, as you say, this requires the customer company either to have their own cluster or for you to set one up for them (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server). They will also need the competence to manage that cluster over time. It could be argued, though, that this shouldn't be much harder than managing a VMware farm or setting up and managing, say, Docker container hosts.
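For a feel of what that setup involves, here is a hedged sketch based on the standalone package described in the linked docs; the config file name is one of the sample templates shipped with the package, and you would edit it to list the customer's machines before running anything.

```powershell
# Hedged sketch: create a standalone Service Fabric cluster from the
# "Service Fabric for Windows Server" package, run from the extracted package folder.
cd C:\ServiceFabricStandalone

# Validate the machines and settings declared in the cluster config before creating anything
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json

# Create the cluster across the machines listed in that config
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.MultiMachine.json -AcceptEULA
```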
For the traditional "shrink-wrapped DVD installer" type of software vendor, this might not be as easy as just supplying an .exe and some system requirements; I agree with you on that. If the customer can't or doesn't want to run their own cluster, and cloud is not an option, then it definitely adds complexity to selling and delivering your solution.
The fact that you can run your own cluster in any Windows Server environment means there is no real lock-in to Azure as a cloud platform, which I think is a big pro for Service Fabric as a framework. Once you have a cluster to receive your applications, you can focus on developing them; the same cannot be said of most other cloud-based PaaS frameworks/services.