Can we select specific build agents for different configurations (Release or Debug) for parallel builds?

I am using the parallel build template (from Jim Lamb's blog) to run builds for two configurations:
one for "Release",
the second for "Debug".
I have two agents on my controller.
Every time I run this build, it picks from the available agents on the controller and assigns them to the two builds at random: sometimes "Release" runs on Agent 1 and "Debug" on Agent 2, and sometimes "Debug" runs on Agent 1 and "Release" on Agent 2.
I want to control this behavior.
Is there a way to choose specific agents for the two configurations?
For example, suppose I have a controller (Controller 1) with two agents (Agent 1 and Agent 2).
I want to select Agent 1 for the "Release" configuration and
Agent 2 for the "Debug" configuration.

When the Name Filter is left as "Default agent", it takes whichever agent is available and continues with that agent. If you want to choose a specific agent, you can use a simple Assign activity just after getting the build agent, as below:
agentSettings.Name = "Agent name"
This lets you choose the build agent that you want to perform the task.
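In Jim Lamb's parallel template, the configurations are fanned out in a loop, so the assignment can also be made conditional on the configuration being built. A hedged sketch as a VB expression in the Assign activity (the loop variable name and the agent display names are assumptions about your workflow):

```vb
' Assign activity placed just after the build agent settings are created.
' "configuration", "Agent 1" and "Agent 2" are placeholders for your workflow.
agentSettings.Name = If(configuration = "Release", "Agent 1", "Agent 2")
```

With this in place, each branch of the parallel build reserves its own named agent instead of taking whichever one happens to be free.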

One way would be to put a build agent tag on each of the agents (via the TFS Administration Console) that corresponds to the configuration you want to run on that agent. Then customize your build template (workflow) so that it specifies the appropriate tag criteria when selecting the agent via the Run On Agent activity.
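For the tag route, the AgentSettings used by Run On Agent also exposes tag filtering. A hedged sketch of the relevant VB expressions (the tag name is an example, and property names may differ slightly between TFS versions, so check them in your template):

```vb
' "release" is a tag you would first put on Agent 1 in the
' TFS Administration Console.
agentSettings.Tags.Add("release")
agentSettings.TagComparison = TagComparison.MatchAtLeast
```

The second branch of the workflow would use a "debug" tag the same way, so each configuration only ever reserves an agent carrying its tag.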

Related

How to resolve "No hosted parallelism has been purchased or granted" in free tier?

I've just started with Azure DevOps pipelines and have created a very simple pipeline with a Maven task. For now I don't care about parallelism, and I'm not sure how I've even added it to my pipeline. Is there any way to use the Maven task on the free tier without parallelism?
This is my pipeline:
trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:
- task: Maven@3
My thought was that tasks always run in parallel? Other than that, I cannot see where the parallel step is.
First: tasks are always executed sequentially, and one sequential pipeline is documented as "1 parallel job" (yes, the naming could be better). Due to the changes laid out below, new accounts now get zero parallel jobs, and a manual request must be made to get the previous default of one free parallel job and the free build minutes.
See this:
We have temporarily disabled the free grant of parallel jobs for public projects and for certain private projects in new organizations. However, you can request this grant by submitting a request. Existing organizations and projects are not affected. Please note that it takes us 2-3 business days to respond to your free tier requests.
More background information on why these limitations are in play:
Change in Azure Pipelines Grant for Private Projects
Change in Azure Pipelines Grant for Public Projects
Changes to Azure Pipelines free grants
TL;DR: people were using automation to spin up thousands of Azure DevOps organizations, add a pipeline, and use the service to send spam, mine bitcoin, or for other nefarious purposes. The fact that they could do so free, quickly, and without any human intervention was a burden on the team. Automatic detection of nefarious behavior proved hard and turned into an endless cat-and-mouse game. The manual step is a necessary evil that has put a stop to this abuse and is in no way meant as a step toward further monetization of the service. It's actually there to ensure a free tier remains something that can be offered to real people like you and me.
This is absurd. The "free tier" is not entirely free unless you submit a request!
Best option: use a self-hosted pool. It can be your laptop, where you would like to run tests.
MS Azure docs here.
Then use that pool in your YAML file:
pool: MyPool
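Putting that together with the Maven pipeline from the question, a minimal sketch (assuming the self-hosted pool was created under Agent pools with the name MyPool):

```yaml
trigger:
- master

pool:
  name: MyPool   # self-hosted pool, so no hosted parallelism grant is needed

steps:
- task: Maven@3
```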
Alternatively
Request access to MS:
Folks, you can request it here. Typically it gets approved in a day or two.
##[error]No hosted parallelism has been purchased or granted. To request a free parallelism grant, please fill out the following form https://aka.ms/azpipelines-parallelism-request
The simplest solution is to change the project from public to private so that you can use the free pool. Private projects have a free pool by default.
Otherwise, consider using a self-hosted pool on your machine, as suggested above.
Here's the billing page.
If you're using a recent version of macOS with Gatekeeper, this "security enhancement" is a serious pain for the unaware, as you get hundreds of errors where each denied assembly has to be manually allowed in Security settings.
Don't do that.
After downloading the agent file from DevOps, and BEFORE you unzip the file, run this command on it. This will remove the attribute that triggers the errors and will allow you to continue uninterrupted.
xattr -c vsts-agent-osx-x64-V.v.v.tar.gz ## replace V.v.v with the version in the filename downloaded.
# then unpack the gzip tar file normally:
tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz
Here are all the steps you need to run, including the above, so that you can move past the "hosted parallelism" issue and continue testing immediately, either while you are waiting for authorization or to skip it entirely.
Go to Project settings -> Agent pools
Create new Agent pool, call it "local" (Call it whatever you want, or you can also do this in the Default agent pool)
Add a new Agent and follow the instructions which will include downloading the Agent for your OS (MacOS here).
Run xattr -c vsts-agent-osx-x64-V.v.v.tar.gz on the downloaded file to remove the Gatekeeper security issues.
Unzip the archive with tar xvfz vsts-agent-osx-x64-V.v.v.tar.gz
cd into the archive directory and run ./config.sh. The most important configuration option is the server URL, which will be https://dev.azure.com/{organization name}; the defaults are fine for the rest. Continue until you are back at the command prompt. At this point, if you look inside DevOps, either in your new agent pool or in Default (depending on where you put it), you'll see your new agent as "offline", so run:
./run.sh, which will bring your agent online. Your agent is now running and listening for you to start your job. Note that this will tie up your terminal window.
Finally, in your pipeline YAML file configure your job to use your local agent by specifying the name of the agent pool where the self-hosted agent resides, like so:
trigger:
- main

pool:
  name: local
# pool:
#   vmImage: ubuntu-latest
I faced the same issue. I changed the project visibility from Public to Private, and then it worked. No need to fill out a form or purchase anything.
Best Regards,
Hitesh

how does Microsoft hosted agent relate to vmImage types?

I am a free-tier user of Azure DevOps. As indicated in https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops#use-a-microsoft-hosted-agent, each user is entitled to 10 parallel jobs.
When I log in to see the available agents in the Azure Pipelines pool, I see the following:
I am just curious: are the agents listed here equivalent to 10 virtual machines? If so, how many of them are Windows images, and how many are Mac images? Or are they just wildcards that can be provisioned as any vmImage type at run time?
thanks!
Or are they just wildcards that can be provisioned as any vmImage type at run time?
Just consider them wildcards; they can be provisioned as any vmImage type at queue time.
Details:
Azure Devops provides some predefined variables about agent, for me I use this script in CMD task to confirm statements above:
echo ID: $(Agent.Id)
echo OS: $(Agent.OS)
echo Name: $(Agent.Name)
echo MachineName: $(Agent.MachineName)
We can disable some of those agents and enable others; that way we can make sure one specific agent is used to run the pipeline. Here's part of the result list:
So you can consider them wildcards; those agents can represent any vmImage type. It's not recommended to disable any of those agents in a normal situation; that is just for test purposes. Normally, if those agents are enabled in a public project, you can easily run ten pipelines (no matter which OS) at the same time.

Can I pass the VCAP_SERVICES to the test stage at the IBM Cloud Continuous Delivery pipeline?

When in (unit) test stage I'm running the following commands:
echo "Installing Node Modules"
npm install
echo "Run Unit Tests"
npm run test-mocha
My problem is that I cannot access the VCAP_SERVICES in the test stage (job is set to unit test).
Is there a way to access / pass them?
The only way I see is using the cf CLI over the provided shell in that stage. But that would require authentication, and you certainly do not want to store your user credentials there.
So one way would be to store the data in the provided Environment tab for that stage. You then have to update that data whenever something changes, because it is not provided by the VCAP file, but that seems to be how it is for the test stage, at least.
As already mentioned, the best way to use VCAP_SERVICES in the test stage is to set it yourself in the stage's Environment Properties configuration.
The pipeline is the build environment. It needs to be able to run even if the app is not yet deployed or has crashed. We sometimes copy in values from the runtime environment, but the build environment should minimize its dependencies on the runtime environment wherever possible.
There's also the question of the pipeline workers being able to access the runtime services specified in VCAP_SERVICES. For the services I've used in my pipelines it has always worked, but it's not a guaranteed thing.
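As a concrete sketch of the Environment Properties approach: the value you paste into the stage's Environment tab is just a JSON string, which the test script can read like any other variable. The service name and URL below are made up for illustration:

```shell
# In the real pipeline, VCAP_SERVICES comes from the stage's Environment
# Properties; it is set inline here only so the sketch is self-contained.
VCAP_SERVICES='{"cloudantNoSQLDB":[{"credentials":{"url":"https://example-host.cloudant.com"}}]}'
export VCAP_SERVICES

# Sanity-check that the unit tests will see the expected service entry:
echo "$VCAP_SERVICES" | grep -o '"cloudantNoSQLDB"'   # prints "cloudantNoSQLDB"
```

Your mocha tests can then parse process.env.VCAP_SERVICES exactly as the app would at runtime.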

How to trigger a build within a build chain after x days?

I am currently using TeamCity to deploy a web application to Azure Cloud Services. We typically deploy using PowerShell scripts to the staging slot and thereafter do a manual swap (staging to production) in the Azure Portal.
After the swap, we typically leave the Staging slot active with the old production deployment for a few days (in the event we need to revert/backout of the deployment) and thereafter delete it - this is a manual process.
I am looking to automate this process using TeamCity. My intended solution is to have a TeamCity build kick off x days after the deployment build has succeeded. (The details of the build steps are irrelevant, since I'd probably use PowerShell again to delete the staging slot.)
This plan has pointed me to look into Teamcity build chains, snapshot dependencies etc.
What I have done so far is
correctly created the build chain by creating a snapshot dependency on the deployment build configuration and
created a Finish Build Trigger
At the moment, the current approach kicks off the dependent build 'Delete Azure Staging Web' (B) immediately after the deployment build has succeeded. However, I would like this to be a delayed build, starting after x days.
Looking at the above build chain, I would like the build B to run on 13-Aug-2016 at 7.31am (if x=3)
I have looked into the Schedule Trigger option as well, but am slightly lost as to how I can use it to achieve this. As far as I understand, a cron expression will make the build run repeatedly, which is not what I want; I would like build B to execute only once.
Yes, this can be done by making use of the REST API.
I've made a small sample which should convey the fundamental steps. It is a PowerShell script that will clear the triggers on another build configuration (determined by a parameter value in the script) and add a scheduled trigger with a start time X days on from the current time (also determined by a parameter value in the script).
1) Add a PowerShell step at the end of the main build and run add-scheduled-trigger as source code.
2) Update the parameter values in the script:
$BuildTypeId - the id of the configuration you want to add the trigger to
$NumberOfDays - the number of days ahead that you want to schedule the trigger for
There is admin/admin embedded in the script as the username/password authentication for the REST API.
Once this is done, you should see a scheduled trigger created/updated each time you run the first configuration.
Hope this helps
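The same idea can be sketched with curl instead of PowerShell. This is a deliberate dry run that only prints the request it would send; the server URL, credentials, build configuration id, and the trigger XML body are all placeholders. Before using it for real, GET /app/rest/buildTypes/id:<id>/triggers on a configuration that already has a scheduled trigger to see the exact property names your TeamCity version expects:

```shell
# Dry-run sketch: compute the target date and print the REST call that
# would create the scheduled trigger. Nothing is actually sent anywhere.
TEAMCITY="http://teamcity.example.com"   # placeholder server URL
BUILD_TYPE_ID="DeleteAzureStagingWeb"    # placeholder configuration id
NUMBER_OF_DAYS=3

# Date X days ahead (GNU date syntax; on macOS use: date -v+3d +%Y-%m-%d)
RUN_DATE=$(date -d "+${NUMBER_OF_DAYS} days" +%Y-%m-%d)

echo curl -u admin:admin -X POST \
  "$TEAMCITY/app/rest/buildTypes/id:$BUILD_TYPE_ID/triggers" \
  -H "Content-Type: application/xml" \
  --data "<trigger type='schedulingTrigger'>...</trigger>"

echo "Would schedule $BUILD_TYPE_ID to run once on $RUN_DATE"
```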

How can I have Jenkins run a script on another server and watch over the result?

I want Jenkins to be able to kick off a deployment process on another server, but I want a process on that server to actually perform the deployment (for security reasons, e.g. rather than having a user on the Jenkins server ssh to the remote server and perform the deployment) and report the status of the deployment back to Jenkins.
It looks like the way to do this is to leverage Jenkins' ability to have slave processes which run on other servers, executing scripts and reporting console output, status, etc. back to the master server.
So the sequence would be:
jenkins (master)
1. build ->
2. create artifacts ->
3. kick off deployment (using ssh kick-off of slave job)

jenkins (slave)
4. perform deployment ->
5. report status back to master jenkins server
Is this the appropriate way to achieve what I want to do?
This is basic "Client - Server" behavior in Jenkins -
see Managing Jenkins > Managing Nodes for that.
A) Install a Jenkins-agent on the remote machine.
B) Make this agent run "Tied Jobs" only.
C) Set step '4.' in your example to run on the remote Agent (or 'Node').
D) Step '3.' in your example should probably be a 'post-build-trigger' of step '2.' (and not a step of its own).
The result of step '4.' in your example will be available to the server machine.
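For reference, the same master/agent split can also be expressed in a declarative Jenkinsfile; the node labels and the deploy script path are placeholders, and the 'deploy-server' label would belong to the tied-jobs-only agent described above:

```groovy
// Sketch only: stage placement mirrors steps 1-5 from the question.
pipeline {
    agent none
    stages {
        stage('Build and package') {        // steps 1-2 on the master node
            agent { label 'built-in' }
            steps { sh 'make build' }       // placeholder build command
        }
        stage('Deploy') {                   // step 4 on the remote agent
            agent { label 'deploy-server' }
            steps { sh './deploy.sh' }      // placeholder deployment script;
        }                                   // console output and exit status
    }                                       // flow back to the master (step 5)
}
```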