How to put jobs in a category for the Throttle Concurrent Builds plugin for Jenkins

I have installed the Throttle Concurrent Builds (TCB) plugin for Jenkins. I have several builds that run tests. These builds must run one at a time, because they access the same files, which can cause tests to fail if more than one test build is running. I have been trying to find the place where I put the builds into a "category", so I can throttle the whole test category down to 1/1. I thought it might be the Jenkins Views, but that did not do the job. How do you add jobs to a category?
This question describes the solution I want: "Jenkins: group jobs and limit build processors for this group". The only problem is that it doesn't say how to add jobs to categories.

You define categories in the global Jenkins configuration (Manage Jenkins -> Configure System, under "Throttle Concurrent Builds"), along with each category's maximum concurrent builds. Assigning a job to a category is then done in the job's own configuration: enable "Throttle Concurrent Builds" and select the category under "Throttle this project as part of one or more categories". See the "Per Project-Category" section in the plugin documentation.
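For Pipeline (Workflow) jobs, the plugin also provides a throttle step. A minimal sketch, assuming a category named test-jobs has already been defined globally (the category name and the shell command are placeholders):

    // Scripted Pipeline: everything inside the throttle block counts
    // against the 'test-jobs' category defined in Manage Jenkins ->
    // Configure System, so at most its configured limit runs at once.
    throttle(['test-jobs']) {
        node {
            checkout scm          // fetch the project sources
            sh './run-tests.sh'   // placeholder test command
        }
    }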

Related

TeamCity - Prevent parallel execution of specific test classes over multiple agents

I am running two agents simultaneously to test one .NET project. Most of the time they're testing different branches (release branch + master).
In a few tests I read mails from our test company mailbox. Unfortunately it's not that easy to get a second mailbox for the second agent, so the two agents disturb each other a little. Is it possible to "lock" these few tests to prevent parallel execution on both agents?
System Specs:
TeamCity Enterprise 2021.1
NUnit Runner: NUnit 3
NUnit Console: 3.11.1
Yes, it is: you want to make use of the Shared Resources build feature. From the TeamCity documentation:
The Shared Resources build feature allows limiting concurrently running builds using a shared resource, such as an external (to the CI server) resource, for example, a test database, or a server with a limited number of connections.
Create a resource in the parent project that represents the mailbox, and give the resource a quota of one.
In each of the test configurations, add a Shared Resources build feature that requires an exclusive write lock on the resource you just created.
If another build tries to take a lock on the resource and the lock cannot be granted, that build waits in the build queue until the resource is free.

Azure DevOps: run build pipelines sequentially

Whenever multiple builds are running on the same pipeline, only the first build completes and the rest error out.
I would like the builds to run sequentially instead of in parallel like they are now, so that if two developers check in at about the same time, the second build also completes.
When editing the pipeline, in the execution plan under the Parallelism heading, I have the radio button set to "None", yet the builds still seem to run in parallel anyway.
Can anyone suggest how to solve this issue?
It seems you have multiple build agents. Assuming you are using self-hosted build agents, you could specify demands so that the pipeline only ever runs on a single agent. That way, if the agent is not free, the build waits in the queue. To pin the pipeline to a particular agent, add a demand of Agent.Name equals agentname; the agent name can be found in the agent's capabilities.

During a release, how to get a list of server names deployed to from a deployment group in a task to use in another job?

What is the way to get a list of server names that were deployed to so they can be used in another job with a different agent in the same deployment pipeline?
We have a number of servers in a deployment group that get deployed to. We would like to point an automated test server at each of these environments to confirm the deployment went correctly. Therefore we need a list of the servers that were deployed to.
Since the list of servers can grow or shrink, we can't hard-code the servers into a variable.
As a workaround we created a PowerShell step that calls the REST API to get the deployment group's machine details. However, we would like to achieve this using variables / outputs etc. in the Azure DevOps interface.
One thing to be aware of is that variables you set with a command during a phase do not persist between phases. If you want to know which deployment servers were deployed to during a phase, you will need to look them up during the test agent phase you are executing.
I think you answered your own question, though. I believe most of the answers you get will be to use the REST API to get the information you are after (a sketch of such a call follows below). That said, the only real sure-fire way I can think of would be to add a step to the deployment group phase and let it run the tests on the deployment server itself.
Not the cleanest solution, but you could also have the deployment group phase trigger a build definition, passing the server name. That build would contain just the testing portion you want to run. You could then have a release step depend on the completion/status of that build definition.
Some features to keep in mind when implementing whatever you decide:
Automatically deploy to new targets in a deployment group
Deploy to failed targets in a Deployment Group
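As a sketch of the REST workaround mentioned above, here is a minimal Groovy example that lists the targets of a deployment group. The organization, project, group id, and PAT environment variable are hypothetical placeholders, and the api-version may differ on your server, so treat this as a starting point rather than a drop-in script:

    // Minimal sketch: list the machines in a deployment group via the
    // Azure DevOps REST API. All names below are placeholders.
    import groovy.json.JsonSlurper

    def org     = 'my-org'                  // hypothetical organization
    def project = 'my-project'              // hypothetical project
    def groupId = 42                        // your deployment group id
    def pat     = System.getenv('AZDO_PAT') // personal access token

    def url  = "https://dev.azure.com/${org}/${project}/_apis/distributedtask/deploymentgroups/${groupId}/targets?api-version=6.0-preview.1"
    def conn = new URL(url).openConnection()
    conn.setRequestProperty('Authorization',
            'Basic ' + ":${pat}".bytes.encodeBase64().toString())

    def targets = new JsonSlurper().parse(conn.inputStream)
    targets.value.each { target ->
        println target.agent.name           // machine name of each target
    }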
From what I can see, there is no easy way to get at what you want. As per designer documentation:
"When you specify multiple jobs in a build pipeline, they run in parallel by default. You can specify the order in which jobs must execute by configuring dependencies between jobs. Job dependencies are not yet supported in release pipelines. Multiple jobs in a release pipeline run in sequence."
I would imagine this is due to the added complexity inherent in allowing jobs to be run on x number of machines.
The YAML documentation doesn't seem to make the same distinction, but I think this is still a not-yet-available feature, as YAML release pipelines as a whole seem to be a roadmap item.

Is it possible to have Jenkins workflows with an overlapping/shared stage?

This question concerns use of the Jenkins Workflow plugin and "synchronizing" a stage amongst independent jobs.
We have a generic workflow for multiple projects with steps:
build project
push project to test environment
run (long) end-to-end test suite
push project to production
Step 3 runs a long time. If multiple projects are built and pushed to the test environment within the same window of time, we'd like to run the end-to-end test suite only once.
Can we have the jobs somehow synchronize on step 3?
The desired orchestration can be achieved by making step 3 a build step, i.e.
build end-to-end-tests
Where end-to-end-tests is a job dedicated to running the slow end-to-end tests.
Adding a Quiet period to end-to-end-tests supports the goal of "collecting" projects updated within a time window into a single end-to-end test run. That is, if projects A and B are pushed to the test environment within the quiet period, then end-to-end-tests runs only once.
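A minimal Workflow sketch of that idea; the job name end-to-end-tests comes from above, while the non-blocking trigger and the 300-second quiet period are illustrative assumptions:

    // In each project's workflow, after pushing to the test environment,
    // trigger the shared test job without waiting for it. The quiet
    // period lets triggers fired close together collapse into one run.
    build job: 'end-to-end-tests', wait: false, quietPeriod: 300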
JENKINS-30269 might be helpful, but your use case is indeed subtly different from the usual one that RFE would solve; you really seem to need a cross-job stage, which is not currently possible though in principle such a step could be written. In the meantime, a downstream deployment job is probably the most reasonable workaround.

Build Flow vs Build Pipeline

I'm trying to split up a few Jenkins jobs using the Build Flow plugin so that instead of three monolithic jobs, we have three "starting points" that then use the DSL to trigger downstream jobs. I chose Build Flow over the Build Pipeline plugin because with Build Pipeline it seemed a lot harder to share jobs between different pipelines ( ie, sharing the workspace of the multiple starting jobs with a single compile job ).
Previously, I had three jobs set up: Project-PR, Project-DEV, and Project-PROD.
Project-PR would build whenever a pull request happened in GitHub, and would just run a smaller subset of our unit tests, so that we could get quick verification that the PR is okay to merge.
Project-DEV would build whenever a feature branch was merged in GitHub into the main development branch, and could also be manually triggered and given a different branch to pull. It would run the full suite of unit tests -- basically a sanity check that everything is still good. Then it would compile and minify, push to a QA environment for testing, and run the full suite of integration tests against that QA environment. This step was configured as a parametrized build, with the parameter being the name of the branch to pull, test, and push. It would push to and set up a QA environment specific to that branch, so that we could QA multiple features without having to merge to development ( ie, feature-one.qa.example.com, feature-two.qa.example.com ).
Project-PROD would only ever be manually triggered, and would do the full unit and integration test suite, compile and minify the front-end code ( Less, JS, and CSS ), and push the built code into a special "release branch" in GitHub that can then be deployed -- we haven't quite reached the point of Jenkins being in charge of deployment.
Now, what I wanted to set up was to split the subtasks into their own jobs, so that it'd be easy to set up new jobs without having to copy and paste all the build steps ( or copying the job and changing all the things that need to be unique ). This would let us do things like create a copy of Project-DEV, but switch out the last job for one that deploys to a staging environment set up in the cloud. Or easily create a job that could report test results to a third-party source, ie copy the results to a shared network folder or something. Or any number of things. The goal is basically to use these subtask jobs as building blocks to let us build more complicated jobs, while also making it easier to update how one portion of the build works ( for example, maybe we switch to a different technology for compiling, which might change how Jenkins would compile the code ).
For example, the Project-PR would be split into the following:
Project-PULL (BuildFlow) -> Project-SetupBuildEnv (Normal Job) -> Project-PartialUnitTests (Normal Job)
The SetupBuildEnv job would just pull down any NPM or Composer requirements, and set up the directories required for testing and building. PartialUnitTests then runs, and reports its results back up to the parent Build Flow job.
The Project-DEV could be split up like so:
Project-DEV -> Project-SetupBuildEnv -> Project-FullUnitTests -> Project-Compile -> Project-Minify -> Project-DeployQA -> Project-FullIntegrationTests
This way, the parts of the build process that are shared ( in this case, Project-SetupBuildEnv ) can be easily shared between jobs, reducing duplication, and making it easier to update a step in the build process without having to remember EVERY job that uses that step.
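In Build Flow's Groovy DSL, the Project-DEV chain sketched above might look like the following; the job names are from the question, while the BRANCH parameter is a hypothetical illustration:

    // Build Flow DSL for Project-DEV: run the shared setup job first,
    // then the rest of the chain in order, passing the branch along.
    def branch = params['BRANCH'] ?: 'development'   // hypothetical parameter

    build("Project-SetupBuildEnv", BRANCH: branch)
    build("Project-FullUnitTests", BRANCH: branch)
    build("Project-Compile", BRANCH: branch)
    build("Project-Minify", BRANCH: branch)
    build("Project-DeployQA", BRANCH: branch)
    build("Project-FullIntegrationTests", BRANCH: branch)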
Right now, I'm using the Shared Workspace plugin so that all the steps use the same workspace. However, I'm running into an issue with that: it's not actually using one workspace. What's happening is that the Build Flow job will get a directory ( eg: /sharedspace/shared_one ), and download the code from GitHub into there. Then it will trigger the DSL, which starts up the 'SetupBuildEnv' job. But instead of working inside the same directory, it will get a directory with a name like "/sharedspace/shared_one#2", and run the build setup task in there. Then when it goes to do the third step ( unit testing ), it fails, because now it's got a third directory ( /sharedspace/shared_one#3 ), but that directory didn't have the setup run, so the required node and composer modules are missing. What's weird is that it looks like the Shared Workspace plugin is copying the first shared workspace to another directory and incrementing a counter ( the #N part of the directory name ) and giving that to the other jobs to work in.
So, question time:
is there a way to fix the Shared Workspace plugin so that it's actually only using one directory for each job?
if not, is it possible to have the Clone Workspace plugin take an argument, so I can specify what archived workspace to use instead of using the dropdown?
another possibility: would using the Shared Workspace plugin, but with the "Local subdirectory for repo (optional)" option in the advanced git job options to specify the directory to use, work?
failing all that, is there some other way to set up a build pipeline that can share jobs with other pipelines that I've missed out on?
In my experience, even if you do get this working, it might not be a scalable way to go longer term. We've found the Shared Workspace plugin to be a thoroughly bad idea for long / complex builds (for reasons similar to yours -- but also: scaling across dozens of slaves suddenly becomes hard). Arguably the idea is slightly against the spirit of modern scalable CI.
I'd instead delegate more to your build tools, be they Maven / Gradle, Ant, even Grunt, whatever. If you want to keep these builds truly modular, but can't afford to rebuild at each step (we decided full independence was worth wasting a few minutes per build) then perhaps look at creating useful artefacts at key stages - in your case, minified assets TARs, library JARs, or maybe webjars, or whatever, and deploy them to a (Maven?) repo.
Later build steps in your pipeline can quickly, easily, and repeatably pull the latest (or named version) assets from this centralised repo, and continue with the build process.
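As a rough Gradle (Groovy) sketch of what pulling those assets in a downstream step could look like -- the repository URL and artifact coordinates are entirely hypothetical:

    // build.gradle for a downstream pipeline step: fetch the minified
    // assets an earlier step published, instead of rebuilding them.
    repositories {
        maven { url 'https://repo.example.com/releases' } // hypothetical repo
    }

    configurations { frontendAssets }

    dependencies {
        // '1.+' resolves the newest published 1.x version; pin an exact
        // version instead for fully reproducible builds.
        frontendAssets 'com.example:project-minified-assets:1.+@tar'
    }

    task fetchAssets(type: Copy) {
        from configurations.frontendAssets
        into "$buildDir/assets"
    }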
An alternative (with similarities) is to build one or more assets, but only promote them after increasing numbers of tests are run, which can be done in separate builds coordinated by your build flow, using the Promoted Builds plugin etc.