Azure DevOps 2019 Batch Changes Does Not Queue Build - azure-devops

We have an on-premises TFVC setup with Azure DevOps 2019. We use the "Classic Editor" to create our pipelines, since YAML does not support TFVC repositories:
https://developercommunity.visualstudio.com/t/enable-yaml-for-tfvc/234618
In the "Triggers" section, we have both options checked: "Enable continuous integration" and "Batch changes while a build is in progress".
However, the result is that the build never gets queued when a check-in occurs. If I uncheck the "Batch changes" option, the build gets queued on every check-in (as expected).
We have two build servers with Windows Server 2019. One has two build agents while the other has four. Both servers exhibit the same issue.
I have tried everything I can think of to get the batch feature to work: toggling "Clean" in the "Get sources" task, toggling "Clean" for the solutions, fiddling with Path filters (adding more specific ones, setting higher level ones), ordering of tasks, etc., but nothing seems to work.
Trial and error is not working but I'm not even sure how to debug this issue. Does anyone have any ideas as to where to look to figure out why the batch feature isn't working? Anything else to try?
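One way to take some of the guesswork out of debugging this (a sketch, not an official diagnostic; the server URL, collection, project, definition ID, and PAT below are all placeholders) is to fetch the build definition over the REST API and check what the server has actually stored for the trigger's batchChanges flag:

```shell
# All values below are placeholders for your on-prem server.
COLLECTION_URL="http://tfs:8080/tfs/DefaultCollection"
PROJECT="MyProject"
DEFINITION_ID=42
URL="$COLLECTION_URL/$PROJECT/_apis/build/definitions/$DEFINITION_ID?api-version=5.0"
echo "inspecting $URL"

# With a PAT that has Build (read) scope, run this against your server and
# look for the trigger's batchChanges flag in the returned JSON:
# curl -s -u ":$PAT" "$URL" | grep -io '"batchChanges":[a-z]*'
```

If the flag comes back as expected, the problem is on the trigger-evaluation side rather than in the saved definition.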

Related

Azure DevOps NuGet restore fails with "unable to load the service index for source"

We have an in-house Azure DevOps 2019 server and I'm currently setting up a build for a new .NET 6 solution whose projects reference various packages from both nuget.org and an in-house feed in our ADO server's "Artifacts" area.
With this being .NET 6, I'm assuming I have to use the ".NET Core" restore task (black square icon) rather than the older "NuGet" restore task (blue icon)? I've therefore added the former and configured the pertinent settings as seen below, where "NuGetPackages" is the name of our in-house feed:
When I run the build, this task is failing with the message
error NU1301: Unable to load the service index for source http://***/_packaging/2df3c440-07a5-4c01-8e5c-bfbd6e132f09/nuget/v3/index.json.
The URL of our in-house feed is:
http://***/_packaging/NuGetPackages/nuget/v3/index.json, so why has the feed name in the URL been replaced with a GUID as seen in the error message? Presumably this is why the restore fails.
Incidentally, we have numerous .NET Framework 4.x solutions that reference the same packages and build fine. These use the older "NuGet" (blue icon) restore task, but the settings are identical to those in the above image, suggesting that the newer ".NET Core" task is doing something strange.
(As an aside, could someone explain the difference between the "NuGet" task and the ".NET Core" task? Could I still use the older task in my .NET 6 build pipeline? I tried it briefly earlier, but it complained that MSBuild v17 isn't installed, and I didn't want to continue down that path for fear of breaking the 4.x builds.)
Something that hasn't been mentioned here, but has been the source of my pain when trying to restore from a feed in a different project but the same organization was certain settings of the consuming project... (sic. https://developercommunity.visualstudio.com/t/restore-nuget-task-unable-to-load-the-service-inde/1337219)
After adding the pipeline permissions to the NuGet feed (as per https://learn.microsoft.com/en-us/azure/devops/artifacts/feeds/feed-permissions?view=azure-devops#pipelines-permissions), you need to update the Limit job authorization scope... settings in the project which is restoring the feed.
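A quick way to tell the failure modes apart (a sketch; the feed URL and PAT are placeholders) is to request the feed's service index directly: a 200 with JSON means the index resolves, while a 401/403 points at the permission/authorization-scope problem described above.

```shell
# Placeholder feed URL; substitute your server and collection.
FEED_URL="http://tfs/DefaultCollection/_packaging/NuGetPackages/nuget/v3/index.json"
echo "checking $FEED_URL"

# Run against your server with a PAT that has Packaging (read) scope:
# curl -s -o /dev/null -w '%{http_code}\n' -u ":$PAT" "$FEED_URL"
```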
I just spent nearly two days fighting this same NU1301 error, and while my ADO instance is cloud-based and the "latest", i.e., not exactly analogous to your situation, maybe my experience will shed some light.
The tl;dr is that there were permission issues with the ADO "project" build service account accessing the "organization" Artifact feed. The output from the DotNetCoreCLI@2 restore task didn't even hint in that direction, but when I dropped back to using the NuGet restore task, the error messages were more informative and helped me discover the underlying issue.
This info doesn't shed light on the GUID/name swap issue you asked about, but maybe the GUID is an internal ID that is resolved to the name afterwards, and if a permissions issue prevents even querying the Artifacts endpoint ...
As for the MSBuild v17 comment, I would heed that advice carefully and trust your trepidation about messing with the existing builds. To paraphrase that old quip ... it's not really paranoia if MS has a well-established history of breaking stuff that has worked just fine for a very long time! ;-}
HTH.
SC
Here are the steps that helped me fix this same issue in Visual Studio.
Make sure that you have Owner or Contributor permissions to the Feed in Azure DevOps. Something like this:
Then in Visual Studio make sure that you are signed in using the account that has the permissions from the previous step.
Finally rebuild the solution.
Hopefully this fixes your issue too!

Can Azure DevOps pipelines, where the build failed, show the user of the last commit when triggered with CI?

I'm doing Visual Studio builds on a self-hosted agent, which are currently triggered by the Continuous Integration setting in an Azure DevOps pipeline.
When a build completes, it shows: Triggered by Microsoft.VisualStudio.Services.TFS
It also shows the repository, branch and revision number.
However, the expectation was that it would show "Triggered by" followed by the committing user. If it can't show the correct Azure DevOps user, at least showing the Subversion user name would be something.
There was also an expectation that it would be possible to send email notifications to the user who made the commit. (Not foolproof evidence that they caused the problem, but the most convenient way to give somebody the responsibility of making sure any build error gets resolved.)
Does anybody know if a solution exists?
In both Classic and YAML pipelines, you can specify a condition for a pipeline step. If you want it to run when the pipeline fails, use condition: failed() (in YAML), or Control Options -> Run this task -> Only when a previous task has failed (in Classic). Alternatively, you can check the Agent.JobStatus variable.
There's no predefined variable for the current committer, but you can easily determine the last commit's author using the svn CLI, then log it. (Any other version control system will have its own CLI that should allow this.)
In YAML, it could look like this (using git instead of svn):
steps:
# ... (your build steps)
- bash: |
    author=$(git log -1 --pretty=format:'%ae')  # get the last commit author's email from git
    echo "last committer: $author"
    # TODO: send email or other kind of notification
  condition: failed()
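For svn, the same idea could look like the sketch below. It parses the author field out of svn log -l 1 --quiet output; the revision number and author in the sample are made up for the demo.

```shell
# parse_author reads 'svn log -l 1 --quiet' output on stdin and prints
# the author field of the newest revision.
parse_author() { awk 'NR==2 {print $3}'; }

# In the failing-build step you would run:
#   author=$(svn log -l 1 --quiet | parse_author)
# Demo with captured sample output (hypothetical revision/author):
sample='------------------------------------------------------------------------
r123 | alice | 2020-01-01 12:00:00 +0000 (Wed, 01 Jan 2020)
------------------------------------------------------------------------'
printf '%s\n' "$sample" | parse_author   # prints: alice
```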
In a Classic pipeline, the equivalent is configured in the task's Control Options:
You are using the wrong tool for the task. A CI build is triggered after changes have been committed to the branch, at which point it is too late to fix those changes, so you end up with a history in which many revisions are not stable.
It might be more suitable for you to use a PR policy build. It is designed to validate incoming changes so the target branch stays stable and ready to deploy. In this case, the policy build is triggered by the PR creator, who will therefore be informed about its result; that can be configured in personal notification settings.
In the end I couldn't get Continuous Integration triggers to work reliably. They would always stop working after a short time. I'm surprised I have run into so many issues with this, but I guess it just isn't that well supported.
Instead, I now queue the build via an svn post-commit hook that uses the Azure DevOps REST API.
The REST API request body accepts a "requestedFor":{"id":""} property, where you can supply the user ID (which I also needed a REST API call to find).
A lot of messing around to get to this point, for a feature I expected to 'just work'. Hopefully this keeps working.
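For anyone heading down the same road, a minimal sketch of such a hook body follows. The collection URL, project, definition ID, and user GUID are all placeholders, and the PAT scope is an assumption; the queue-build endpoint is POST {collection}/{project}/_apis/build/builds.

```shell
# Sketch of an svn post-commit hook; every value here is a placeholder.
COLLECTION_URL="http://tfs:8080/tfs/DefaultCollection"
PROJECT="MyProject"
DEFINITION_ID=12
USER_ID="00000000-0000-0000-0000-000000000000"  # found via a separate REST call

BODY=$(printf '{"definition":{"id":%s},"requestedFor":{"id":"%s"}}' \
  "$DEFINITION_ID" "$USER_ID")
echo "$BODY"

# Run against your server with a PAT that has Build (read & execute) scope:
# curl -s -u ":$PAT" -H "Content-Type: application/json" \
#   -d "$BODY" "$COLLECTION_URL/$PROJECT/_apis/build/builds?api-version=5.0"
```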

Azure DevOps Pipelines Release Copy Files step "The process cannot access the file because it is being used by another process"

I am using Azure DevOps Pipelines releases to try to deploy a Windows service on-premises. Periodically, the Windows copy files step will hang, retry every 30 seconds, and output "The process cannot access the file because it is being used by another process" as it attempts to copy the build artifacts.
We've ruled out any kind of permission issue. We've tried all sorts of tools to see what might be locking these files up and they don't tell us much.
This has happened before with some other projects I was also trying to release on-premises. Sometimes I am able to just wait an hour or two and redeploy successfully (not exactly a solution I'm satisfied with), but this one particular project, a Windows service, seems to experience the issue very frequently. Almost every time I try to deploy.
Has anyone else experienced this? Or any word from Microsoft on the issue?
Thanks in advance.
I experienced this issue while trying to create and deploy a release from an existing artifact. I have a build pipeline on Azure DevOps that generates artifacts to be used by the release pipeline. All I did was make a commit that triggered the build pipeline, which generated a new artifact and triggered the release, and it worked fine.
It was the first time I experienced this and I had no clue on why it happened.
I'm going to be doing more research and share any thing I find helpful.
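One workaround worth trying when the lock comes from the deployed service itself is to stop the Windows service in a step before the copy and start it again afterwards. The sketch below uses a hypothetical service name and paths, and only echoes the commands so the sequence is inspectable without a Windows host; on the actual target you would drop the wrapper and run them for real.

```shell
# Hypothetical service name and paths; 'run' echoes instead of executing
# so this sketch runs anywhere. On the deployment target, remove the wrapper.
run() { echo "+ $*"; }

run net stop MyWindowsService       # release file locks held by the service
run robocopy '\\build\drop' 'C:\Services\MyWindowsService' /MIR
run net start MyWindowsService      # bring the service back up
```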

Azure DevOps Server hangs when I try to deploy a release - What's the reason/remedy for this?

I cannot deploy a release in Azure DevOps Server (on-premises). Whenever I do, Azure DevOps Server shows a "loading" spinner and hangs:
The spinner is shown forever:
There are two other users in our team who share the same issue I have. I've been assigned Azure DevOps Server Administrator rights, still I cannot deploy, so it isn't a rights issue.
I've been logging into another Windows machine, been using other browsers ... yet, to no avail. I always get to see the spinner only.
Strangely enough, other users in our team can deploy releases in our project flawlessly. Currently, only three people in our team observe the Azure DevOps Server hang. Even we could deploy releases until two weeks ago. No one has changed anything; it suddenly stopped working for three of us (me included).
I suppose it's a bug in Azure DevOps Server.
What's causing this behaviour? How can we cope with it?
EDIT:
These are the JavaScript warnings I get to see when using IE 11:
It seems the error stems from the fact that we separated our boards from the repositories in Azure DevOps.
In Azure DevOps we created a common board for all projects, but we left the projects themselves in their own Area. Then we moved the boards' items to the new overall board while we left the repositories at their original location (Area).
This seems to cause the hang when a repository's board gets disabled.
Re-enabling the repository's (now obsolete and redundant) board solves the issue.

Why does GitHub check not reflect Azure Pipelines build status?

I am trying to add Azure Pipelines configuration to an existing project, bundler/bundler. Here is the PR that adds the configuration:
https://github.com/bundler/bundler/pull/6899
As one of the maintainers set up the bundler/bundler project on Azure Pipelines, this PR already triggers a build:
https://dev.azure.com/bundler/bundler/_build/results?buildId=11
Note that the build has a green checkmark and is marked as finished.
(Also note that there are loads of tests failing in the build, as this wasn't tested on Windows before. To make the build succeed anyway, so that PRs and commits don't all get the red "x" on GitHub while I am working on fixing the tests, I added || exit 0 at the end of the test command, which works fine on Azure Pipelines.)
A feature of Azure Pipelines' GitHub integration is that the build results are shown in GitHub via a feature called "Checks":
https://github.com/bundler/bundler/pull/6899/checks
(A shorter version of that is also included at the end of the PR page: https://github.com/bundler/bundler/pull/6899#partial-pull-merging)
Unfortunately, this check doesn't reflect the build status on Azure Pipelines and is still shown as "in progress":
and
Any idea why the GitHub check doesn't reflect the build status on Azure Pipelines?
What confuses me further is that the integration with Azure Pipelines worked just fine (the check correctly reflects the build status) in the pull request that was automatically created by Azure Pipelines when the bundler/bundler project was set up: https://github.com/bundler/bundler/pull/6955
But it also can't really be the Azure Pipelines configuration I created in my PR, because the same configuration works just fine in my fork: https://github.com/janpio/bundler/pull/6#partial-timeline (see the green checkmark for the bundler task). (On the other hand, there Azure Pipelines doesn't use the "Checks" feature of GitHub at all.)
Great question. The most likely reason is that there was some glitch in the communication between Azure Pipelines and GitHub. It's very rare, but sometimes a webhook between GitHub and Azure Pipelines doesn't fire. There's no way to tell why it happened; it could have been a fault on either side.
Unfortunately, there's no way to re-send a webhook that didn't get delivered. Your only recourse is to rebuild that pull request. If you select the "Rebuild" option (in the ... menu):
Then a new build will be queued and, when it finishes, the status update will be sent back to GitHub. The check in the pull request will then be updated.
A less likely (but definitely possible) reason is that there's a bug in either Azure Pipelines or GitHub. And in this particular case, there was a bug with the code that uploads test results from Azure Pipelines to the test case manager API.
(Thanks for reporting the issue, we're sorry that we had a bit of a problem here, but we're glad that we were able to resolve this.)
Setting the following configuration worked for me: