TeamCity - Prevent parallel execution of specific test classes over multiple agents - NUnit

I am running two agents simultaneously to test one .NET project. Most of the time they are testing different branches (release branch + master).
In a few tests I read mail from our test company mailbox. Unfortunately it is not easy to get a second mailbox for the second agent, so the two agents disturb each other a little. Is it possible to "lock" these few tests to prevent parallel execution across both agents?
System Specs:
TeamCity Enterprise 2021.1
Nunit Runner: NUnit 3
NUnit Console: 3.11.1

Yes it is. You want to make use of shared resources.
The Shared Resources build feature allows limiting concurrently running builds using a shared resource, such as a resource external to the CI server, for example a test database or a server with a limited number of connections.
Create a resource in the parent project which represents the mailbox. Give the resource a quota limit of one.
In each of the test configurations add a shared resource build feature that requires an exclusive write lock on the resource you just created.
If another build tries to take a lock on the resource and the lock cannot be granted, that build waits in the build queue until the resource is free again.
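If the project is stored in versioned settings, the resource and the lock can be sketched in the TeamCity Kotlin DSL. This is a sketch, not a drop-in file: the feature type "JetBrains.SharedResources" and the parameter names below reflect how TeamCity serializes these features, so verify them against a project you configured through the UI.

```kotlin
// Project-level feature: a "quoted" resource named TestMailbox with a quota of 1.
project {
    features {
        feature {
            type = "JetBrains.SharedResources"
            param("name", "TestMailbox")
            param("type", "quoted")
            param("quota", "1")
        }
    }
}

// In each build configuration that reads the mailbox:
// request an exclusive write lock on that resource.
object MailboxTests : BuildType({
    features {
        feature {
            type = "JetBrains.SharedResources"
            param("locks-param", "TestMailbox writeLock")
        }
    }
})
```

With a quota of 1 and a write lock, only one of the two agents can run a mailbox build at any time; the other build waits in the queue.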

Related

In Azure DevOps or Team City what is a build agent?

I'm new to CI/CD and need to implement it in an old project at my company. From what I have read we have a couple of options, such as Azure DevOps or TeamCity; I chose these two because most of our projects are built on Microsoft technologies.
I have been reading for a while, but I cannot grasp the proper definition of a build agent; I also found this old question, but the answer is unclear:
In Team foundation server what is build agent and controller?
Further, I read different documentation:
Azure Pipelines agents
Build Agent
And their definitions are the following ones:
An agent is installable software that runs one job at a time. (Microsoft)
A TeamCity Build Agent is a piece of software that listens for commands from the TeamCity server and starts the actual build processes. (JetBrains)
However, I cannot understand exactly what their role or purpose is. Do they build the Test, UAT, and Production pipelines in parallel to see whether the compilations were successful? Or what do they do? This matters because the solution contains multiple projects, maybe 8 or 10.
You can take the example that I gave in the comments below:
Let's suppose you create a project in Azure DevOps for your new CRM for a dentist, with Debug, UAT, and Release environments plus a team of 5 devs. What would these agents represent?
Thanks for any explanation.
It's analogous to a human "agent" who has different skills. Think of a build agent as a computer process that has certain capabilities needed to perform a build.
Some agents can perform particular jobs (e.g. building Apple-specific programs), while other agents are more general purpose. Sometimes a computer has multiple agents that can work in parallel; other times a computer has only a single agent assigned to it.
Edit - Added the following to address additional questions:
Agents can be "local", meaning they run on the same server as the build software (e.g. Azure Pipelines, Bamboo, TeamCity). They can also be "remote", i.e. on a different computer. A remote agent may be needed to build Apple-specific software, as that often requires a Mac to compile.
Extending the human "agent" with different skills analogy, agents can be assigned jobs. So one agent may be assigned building software in your pipeline while another agent is busy handling deployments to different environments. Since each "agent" can only do a single job at a time, more agents can speed up build pipelines by allowing parallel jobs.
In Azure DevOps, there is a left-hand navigation section called Pipelines where you create a build pipeline (with certain tasks), and that pipeline requires an agent to perform the tasks.
In general, a build agent is a machine (hosted, in the case of Azure DevOps) with the necessary capabilities, used to run the predefined tasks of the build pipeline: building the source code and making it available for deployment.
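To make the agent's role concrete, here is a minimal Azure Pipelines YAML sketch (the pool name and job contents are invented for illustration). Each job is picked up by one free agent from the pool, so two idle agents run the first two jobs in parallel, while a single agent would run them one after another:

```yaml
# azure-pipelines.yml: three jobs and their agent assignment.
jobs:
- job: BuildDebug
  pool: Default        # an idle agent from this pool picks the job up
  steps:
  - script: echo "building Debug configuration"

- job: BuildRelease
  pool: Default        # with a second free agent, this runs in parallel
  steps:
  - script: echo "building Release configuration"

- job: Deploy
  dependsOn:           # starts only after both builds finish
  - BuildDebug
  - BuildRelease
  pool: Default
  steps:
  - script: echo "deploying"
```

The team size (5 devs) does not change this picture; agents constrain how many jobs run at once, not how many people push changes.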

During a release, how to get a list of server names deployed to from a deployment group in a task to use in another job?

What is the way to get a list of server names that were deployed to so they can be used in another job with a different agent in the same deployment pipeline?
We have a number of servers in a deployment group that get deployed to. We would like to point an automated test server at each of these environments to confirm the deployment went correctly. Therefore we need a list of the servers that were deployed to.
Since the list of servers could grow or shrink we can't hard code all the servers to a variable.
As a workaround we created a PowerShell step that calls the REST API to get the deployment group machine details. However, we would like to achieve this using variables/outputs etc. in the Azure DevOps interface.
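For illustration, that workaround might look like this (sketched in Python rather than PowerShell; the endpoint is the Deployment Groups "targets" REST API, and the organization, project, group id, and response field names here are assumptions to check against your own tenant):

```python
import base64
import json
import urllib.request

def target_names(payload: dict) -> list:
    """Extract machine names from a deployment-group 'targets' response.

    Assumes the shape {"count": n, "value": [{"agent": {"name": ...}}, ...]}.
    """
    return [t["agent"]["name"] for t in payload.get("value", [])]

def fetch_targets(organization: str, project: str, group_id: int, pat: str) -> dict:
    # Placeholder endpoint; the api-version may differ on your server.
    url = (f"https://dev.azure.com/{organization}/{project}/_apis/"
           f"distributedtask/deploymentgroups/{group_id}/targets?api-version=6.0")
    token = base64.b64encode(f":{pat}".encode()).decode()
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# The parsing step demonstrated on a canned response:
sample = {"count": 2, "value": [{"agent": {"name": "web-01"}},
                                {"agent": {"name": "web-02"}}]}
print(target_names(sample))  # ['web-01', 'web-02']
```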
One thing to be aware of is that variables you set by command do not persist between phases. If you want to know which deployment servers were deployed to during a phase, you will need to determine them during the test agent phase you are executing.
I think you answered your own question, though. I believe most of the answers you get will suggest using the API to get the information you are after. That said, the only real sure-fire way, I think, would be to add a step to the deployment group phase and let it run the tests on the deployment server.
Not the cleanest solution, but you could also have the deployment group trigger a build definition passing the server name. The build task would just have the testing portion that you want to run. You could have that release step depend on the completion/status of the build definition.
Some features to keep in mind when implementing whatever you decide:
Automatically deploy to new targets in a deployment group
Deploy to failed targets in a Deployment Group
From what I can see, there is no easy way to get at what you want. As per designer documentation:
"When you specify multiple jobs in a build pipeline, they run in parallel by default. You can specify the order in which jobs must execute by configuring dependencies between jobs. Job dependencies are not yet supported in release pipelines. Multiple jobs in a release pipeline run in sequence."
I would imagine this is due to the added complexity inherent in allowing jobs to be run on x number of machines.
The YAML documentation doesn't seem to make the same distinction, but I think this is still a not-yet-available feature, as YAML release pipelines as a whole seem to be a roadmap item.

VSTS build agents - Can one computer run multiple build agents?

I have a Windows VM that hosts a VSTS build agent. Due to the number and length of builds that are running I would like to know whether multiple build agents can be hosted on one computer? That would allow a dedicated agent for slow builds, and a dedicated agent for quick builds.
https://www.visualstudio.com/en-us/docs/build/admin/agents/v2-windows
Yes you can run multiple agents in a single VM.
Create two directories, say Agent1 and Agent2, extract the agent into each of them, and configure the agents with different names against your VSTS/TFS account.
It should work out of the box.
We run 4 agent jobs per machine concurrently with no issues. As mentioned above, it should work out of the box. Just make sure you clean up the work directories; we have a script that does this every night.
Yes, this works, I did the following:
Created a PAT for agent installation needs
Downloaded agent binaries from the agent creation page
Unpacked the archive contents into 2 different directories ("c:\ado-build-agents\agent1" and "c:\ado-build-agents\agent2")
Ran "config.cmd" and followed the configuration instructions it provided
Updated the pipelines to use the agent pool those agents reside in ("Default" in my case)
To test the setup I triggered all 15 pipelines that I had. As a result, two pipelines ran at the same time while the others stayed in the "Queued" state, matching my expectations.
I will also be testing how resources are consumed by the agents, to understand whether I should deploy more agents on the build machine.
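The configuration steps above can be sketched as unattended commands, one run per extracted directory (the organization URL, pool name, and PAT environment variable are placeholders; check the flags against your agent version's config.cmd --help):

```shell
REM One extracted copy of the agent per directory,
REM each registered under a different agent name in the same pool.
cd /d c:\ado-build-agents\agent1
.\config.cmd --unattended --url https://dev.azure.com/YOUR_ORG --auth pat --token %ADO_PAT% --pool Default --agent agent1 --runAsService

cd /d c:\ado-build-agents\agent2
.\config.cmd --unattended --url https://dev.azure.com/YOUR_ORG --auth pat --token %ADO_PAT% --pool Default --agent agent2 --runAsService
```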

How to put jobs in a category for the Throttle Concurrent Builds plugin for Jenkins

I have downloaded the TCB plugin for Jenkins. I have several builds that run tests. These builds must run one at a time, as they access shared files that can cause tests to fail if more than one test build is running. I have been trying to find the place where I can put the builds into a "category", so I can throttle the whole test category down to 1/1. I thought it might be Jenkins views, but that did not do the job. How do you add jobs to a category?
This question discusses the solution I want: Jenkins: group jobs and limit build processors for this group. The only problem is that it doesn't say how to add jobs to categories.
You set up categories in the global Jenkins configuration (Manage Jenkins -> Configure System) and then assign jobs to those categories in each job's configuration, under the "Throttle Concurrent Builds" option. See the "Per Project-Category" section in the plugin documentation.
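For Pipeline jobs, the plugin also exposes a throttle step, so a globally defined category can be referenced from a Jenkinsfile. A sketch, assuming the Throttle Concurrent Builds plugin is installed and a category named "test-builds" already exists with a limit of 1:

```groovy
// Scripted Jenkinsfile: everything inside this block counts against
// the "test-builds" category, which is capped at 1 concurrent build.
throttle(['test-builds']) {
    node {
        stage('Test') {
            // the tests that touch the shared files
            sh './run-tests.sh'
        }
    }
}
```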

How to perform automated deployment - with a Pull model

We're currently doing continuous deployment to our dev/QA servers, plus manually triggered automated deployment to our production boxes. Currently we're using TeamCity/PowerShell/MSDeploy. We now have a requirement to deploy to a server that sits on an external network and cannot be reached from outside. Instead, it will have to "call home" for updates, and presumably push the results back to report whether it succeeded.
I'm thinking we could write a service that requests a particular URL on our build server which delivers the artifacts that would have been used for deployment, pulls them down, and then fires off the build script.
However, I'm not entirely sure how we'd deal with updating the updater, and failures when they occur. Does anyone have any recommendations on how to approach this?
Sounds like you need a release repository. The build server pushes files into it and each deploy job pulls from it. This would neatly decouple the two processes.
A release repository could be as simple as a shared NAS, or something more sophisticated such as the Nexus repository manager.
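A minimal sketch of the pull side, assuming a release-index URL and response shape that are invented here for illustration (a real updater would add authentication, retries, and reporting back to the build server; keeping the updater this small also makes "updating the updater" a matter of replacing one script):

```python
import json
import subprocess
import urllib.request

RELEASE_INDEX = "https://builds.example.com/releases/latest.json"  # hypothetical URL

def needs_update(installed: str, available: str) -> bool:
    """Compare dotted version strings numerically, so 1.10.0 > 1.9.3."""
    parse = lambda v: [int(p) for p in v.split(".")]
    return parse(available) > parse(installed)

def check_and_deploy(installed_version: str) -> None:
    # Poll the release repository; shape assumed: {"version": ..., "artifact": ...}
    with urllib.request.urlopen(RELEASE_INDEX) as resp:
        latest = json.load(resp)
    if needs_update(installed_version, latest["version"]):
        urllib.request.urlretrieve(latest["artifact"], "release.zip")
        # Hand off to the existing deployment script; report the outcome home.
        subprocess.run(["powershell", "-File", "deploy.ps1", "release.zip"],
                       check=True)

print(needs_update("1.9.3", "1.10.0"))  # True: numeric, not lexicographic, compare
```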