TFS2010 and lazy build agents - build-server

We have a TFS2010 setup with a single controller and 2 agents running on the same build machine. Yesterday the build server stopped running 2 concurrent builds and just let one agent do all the work. I've tried restarting the controller and the agents, but with no luck. There's no pattern, and both agents do work - just one at a time. I've added a new agent today (on the same machine) and it can now pick up 2 concurrent builds - but I still have one lazy agent. Any thoughts?
New Info:
When I have 2 running builds and a couple in the queue (NB with 3 agents in total) and I change the priority to high - it starts to build on the last agent!?

Ok - so an invalid entry in tbl_BuildQueue in the TFS database was the reason.
Normal Priority Builds Will Not Build in TFS 2010
The quick fix is to delete the entries in tbl_BuildQueue which have a DefinitionId that doesn't exist:
SELECT * FROM [Tfs_Default].[dbo].[tbl_BuildQueue] where DefinitionId not in (select DefinitionId from tbl_BuildDefinition)
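If that query returns rows, the cleanup could look like this (a sketch, assuming the same Tfs_Default collection database as above; direct database modification is unsupported, so take a backup first):

```sql
-- Remove queued builds whose build definition no longer exists.
-- Run against the project collection database (here: Tfs_Default).
DELETE FROM [Tfs_Default].[dbo].[tbl_BuildQueue]
WHERE DefinitionId NOT IN
    (SELECT DefinitionId FROM [Tfs_Default].[dbo].[tbl_BuildDefinition])
```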

There are a few things you can check:
Are any of the build definitions configured to use Agent Tags or Agent Name Filters?
Are any of the agents configured with tags? You can check in the TF Admin Console.
Check the status of each agent using "Build" -> "Manage Build Controllers..." from Visual Studio.
Check the status of each agent using the TF Admin Console on the agent.
Is the TF Admin Console reporting any events in the last 24 hours?

We are currently working with a customer to resolve an issue that can leave agents orphaned to a build which is no longer running. This occurs due to a race condition in a stored procedure and has nothing to do with missing foreign key relationships.
If you would like to verify that this has in fact occurred, run the following query on your project collection database:
SELECT *
FROM tbl_BuildAgent ba
LEFT JOIN tbl_BuildAgentReservation bar
ON bar.ReservationId = ba.ReservationId
WHERE ba.ReservationId IS NOT NULL
AND bar.ReservationId IS NULL
If this returns any rows, you can temporarily fix the issue by setting the 'ReservationId' column of the affected build agents back to NULL. After updating this column, any new builds queued will be able to utilize the agent which was previously "lazy", as you put it.
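The temporary fix described above could be written as follows (a sketch against the project collection database; this is unsupported direct modification, so back up the database first):

```sql
-- Clear orphaned reservations so the affected agents become available again.
UPDATE ba
SET ba.ReservationId = NULL
FROM tbl_BuildAgent ba
LEFT JOIN tbl_BuildAgentReservation bar
    ON bar.ReservationId = ba.ReservationId
WHERE ba.ReservationId IS NOT NULL
  AND bar.ReservationId IS NULL
```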
Patrick

Related

How to pull code from different branches at runtime and pass parameter to NUnit.xml file?

We recently moved from Java (TestNG) to C#/.NET (NUnit), and at the same time migrated from Jenkins to TeamCity. Currently we are facing some challenges while configuring the new build pipeline in TeamCity.
Scenario 1: Our project has multiple branches; we generally pull code from different Git branches and then trigger the automation.
In Jenkins we used to create a build parameter (a list); when a user executes the job, they select the branch name from the list of build parameters, Git pulls the code from the selected branch, and then the execution is triggered.
Can you please help with how to implement a similar process in TeamCity?
How do I configure the default value in the list parameter?
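For reference, this maps to a TeamCity configuration parameter with a "Select" type specification, which the VCS root or checkout settings can then reference. A sketch of such a parameter (the name BRANCH_NAME and the branch values are assumptions for illustration):

```text
# Parameter: BRANCH_NAME
# Value:     refs/heads/main      <- the parameter's own value acts as the default
# Spec (Edit parameter -> Spec -> Display: Prompt, Type: Select):
select display='prompt' label='Branch' data_1='refs/heads/main' data_2='refs/heads/develop' data_3='refs/heads/release'
```

With display='prompt', TeamCity asks the user to pick a value from the list when the build is triggered, similar to a Jenkins choice parameter.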
Scenario 2: In Jenkins, build parameters used to be passed to TestNG.xml, e.g. browser and environment. When the user selects a browser and environment from the build parameters, TestNG pulls those values when execution is triggered and initiates the regression run.
How should I create build parameters (browser, environment) and pass those values to the NUnit config file?
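One common approach (a sketch; the parameter names BROWSER and ENVIRONMENT are assumptions): define the TeamCity parameters with the env. prefix (env.BROWSER, env.ENVIRONMENT) so they are exported as environment variables of the build process, then read them in NUnit setup code instead of a TestNG.xml:

```csharp
using NUnit.Framework;

[SetUpFixture]
public class TestConfig
{
    // Defaults used for local runs outside TeamCity.
    public static string Browser = "chrome";
    public static string TargetEnvironment = "qa";

    [OneTimeSetUp]
    public void ReadBuildParameters()
    {
        // TeamCity exports parameters named env.BROWSER / env.ENVIRONMENT
        // as environment variables of the build process.
        Browser = System.Environment.GetEnvironmentVariable("BROWSER") ?? Browser;
        TargetEnvironment = System.Environment.GetEnvironmentVariable("ENVIRONMENT") ?? TargetEnvironment;
    }
}
```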
Thanks
Raghu

Tag resources when registering to the environment

I have a pipeline with multiple stages that deploys groups of virtual machines and registers one of them to an Azure Pipelines environment.
Then I want to target that registered VM in a deployment job.
The problem is that I can't target that resource by name, because the resource does not exist in the environment at queue time, so I cannot even disable the stage before running.
So my next option is targeting by tags.
But I saw no option in the registration script to define tags at registration time.
So my pipeline flow has a manual step between stages: go to the environment and tag the resource.
Then I can trigger the deployment stage of the pipeline and it continues OK.
So my question is:
Is there any way of disabling the resource evaluation at queue time, or any way to tag resources in the environment programmatically?
Thanks
But I saw no option in the registration script to define tags at registration time.
When running the registration script, there is a step: Enter Environment Virtual Machine resource tags? (Y/N) (press enter for N). At this point you need to enter Y, and then at the next step, Enter Comma separated list of tags (e.g web, db), define the tags for the resource.
Update:
You can add --addvirtualmachineresourcetags --virtualmachineresourcetags "<tag>" to the registration script.
You can refer to this case for details.
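Put together, the relevant part of the generated registration script could look like this (a sketch; the environment name and tag values are examples, and the other arguments of the real config call are omitted):

```powershell
# Inside the generated registration script, the agent configuration call
# gets the two extra arguments so the resource is tagged at registration time,
# removing the need for a manual tagging step between stages.
.\config.cmd --environment --environmentname "MyEnvironment" `
    --addvirtualmachineresourcetags --virtualmachineresourcetags "web, db"
```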

How to create a task group with a parameterized Azure Resource Manager connection field?

I want to create a task group where the Azure Resource Manager connection is filled in with a parameter:
However, this is not possible to do in the portal, as validation forces you to fill it with a working value. So I tried to export the task group as JSON, modify it, and import it again, but then I got this message when saving the release pipeline:
Is there a way to overcome this? I understand that this is a security check (which, by the way, doesn't work in YAML pipelines, because there you can use an Azure Resource Manager connection even if you are not allowed to). However, in this way it limits the usage of a task group to a single connection.
EDIT:
Kevin, thank you for your answer. I tried it but it didn't work for me.
So I have the connection rg-the-code-manual:
I created a variable with it:
But when I tried to use it I got a validation error:
Based on my test, when I set the variable as the Azure Resource Manager Connection name, I could reproduce the same issue.
For example:
To solve this issue, you need to set the variable value in release pipeline.
Then you could save the release pipeline successfully.
On the other hand, you could also set the default value for the variable in Task Group.
In this case, the task group will use the default value in the release pipeline. The parameter will also exist in the task group task, and you can directly select the value in the drop-down list.
Note: you need to make sure that the Service connection name is valid.

Notification when artifact can't be downloaded

We have a scheduled release in Octopus that re-deploys the last known good Prod release back to Prod.
However, this has started failing because the artifact has fallen out of our retention policy - that part we can fix by altering the retention policy.
The real issue is that when it failed, no notifications were sent to the team, because artifact collection happens before even the first step.
I have tested this with a dummy release that has just a single basic step and then a Slack notification step for when it fails. However, we never get to the first step - let alone the Slack step.
How can I hook into this failure so that we know about these issues in future?
You have to follow the steps below to achieve this:
Step 1) Add an Email Template step first, to inform the team that the build has been triggered.
There is a setting in that step called Start Trigger; set it to "Run in parallel with the previous step" so the email is sent while your artifacts are downloading.
Step 2) Add an Email Template step last, to inform the team that the build failed.
Just change the Run Condition setting to "Failure: only run when a previous step failed".
So when your deployment fails, it will notify the team accordingly. You can also add the cause of the failure to the email body using built-in variables.

Deployment error

I am getting an error when deploying ADF pipelines. I don't understand how to resolve this error message:
Pipeline Populate SDW_dbo_UserProfiles from SDW_dbo_CTAS_ApptraxDevices is in Failed state. Cannot set active period Start=05/30/2017 00:00:00, End=05/29/2018 23:59:59 for pipeline 'Populate SDW_dbo_UserProfiles from SDW_dbo_CTAS_ApptraxDevices' due to conflicts on Output: SDW_dbo_UserProfiles with Pipeline: Populate SDW_dbo_UserProfiles from SDW_dbo_Manifest, Activity StoredProcedureActivityTemplate, Period: Start=05/30/2017 00:00:00, End=05/30/2018 00:00:00.
Try changing the active period or using autoResolve option when setting the active period.
I am authoring and deploying from within Visual Studio 2015. All of my pipelines have the same values for Start and End:
"start": "2017-05-30T00:00:00Z",
"end": "2018-05-29T23:59:59Z"
How do I resolve this issue?
Visual Studio can be fun sometimes when it comes to validating your JSON because not only does it check everything in your solution it also validates against what you already have deployed in Azure!
I suspect this error will be because there is a pipeline that you have already deployed that now differs from Visual Studio. If you delete the affected pipeline from ADF in Azure manually and then redeploy you should be fine.
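The manual delete can also be scripted with the ADF v1 PowerShell cmdlets from that era (a sketch; the resource group and factory names are placeholders, and the pipeline name is taken from the error message above):

```powershell
# Remove the conflicting pipeline from the data factory in Azure
# before redeploying from Visual Studio.
Remove-AzureRmDataFactoryPipeline -ResourceGroupName "MyResourceGroup" `
    -DataFactoryName "MyDataFactory" `
    -Name "Populate SDW_dbo_UserProfiles from SDW_dbo_CTAS_ApptraxDevices"
```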
Sadly the tooling isn't yet clever enough to understand which values should take precedence and be overwritten at deployment time. So for now it simply errors because of a mismatch, any mismatch!
You will also encounter similar issues if you remove datasets from your solution. They will still be used for validation at deployment time because the wizard first deploys all new things before trying to delete the old. I've fed this back to Microsoft already as an issue that needs attention for complex solutions with changing schedules.
Hope this helps.