Azure DevOps hosted agents failing nuget restore - azure-devops

Intermittently, Azure DevOps hosted agents fail to restore NuGet packages. The restore appears to fail on its first attempt to hit https://api.nuget.org/v3/index.json, and then it tries a third-party source I have in my nuget.config. Has anyone found a workaround for this? Most of the time a rerun will fix it, but other times it takes 3 or 4 attempts before it finally completes. Today I also have a Docker build (in DevOps) failing on the same step, and it just won't get past it.
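One mitigation worth trying (not mentioned in the question) is to let the restore step retry itself on transient feed failures via `retryCountOnTaskFailure`. A minimal sketch, assuming a YAML pipeline and the standard `NuGetCommand@2` task; the solution and config paths are illustrative:

```yaml
steps:
# Retry the restore up to 3 extra times if the feed is flaky.
- task: NuGetCommand@2
  retryCountOnTaskFailure: 3
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'
    feedsToUse: 'config'
    nugetConfigPath: 'nuget.config'  # lists api.nuget.org plus the 3rd-party source
```

This does not fix the underlying flakiness, but it turns a failed run into a retried step instead of a manual rerun of the whole pipeline.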

Related

Azure DevOps Artifacts getting deleted

I have a bunch of Azure Pipeline and Release packages. The pipelines build and publish my code to a testing environment every night, and as part of that publish they create artifacts used by the Selenium testing software.
This has been working correctly for several years now. Sometime around our last Azure DevOps upgrade, the artifacts started getting deleted after about a day instead of honoring the deletion schedule I have set up.
The artifacts are generated by the pipeline packages and used by the release packages, and everything works correctly. However, sometime after the last release has finished running for the night, all the artifacts get deleted. I have tried running the entire process manually, and when it is done the artifacts still exist (at least until the next day; I haven't identified a specific time they get deleted), so the issue does not appear to be within any of the packages themselves.
I have checked the Settings retention policy, which I believe applies to the pipelines.
I have also checked the Release retention policy, which should apply to the release packages.
Does anyone have any idea why my artifacts are not sticking around past 1 day?
It seems I have figured out the problem, and it appears to be caused by a change in how Azure DevOps works in the newest version. This is the first version in which I have noticed the issue:
Azure DevOps Server 2020 Update 1.1
The settings indicate that artifacts should be kept for 20 days in my instance. However, every time I run a build, it overwrites the existing artifacts with the newest ones. My guess is that the artifact path gets tied to the build in the database. Since all artifact paths are identical in my case (something that did not occur to me to mention in the original post), deleting an artifact that is 20 days old also deletes the data at the current artifact path.
I solved this issue for myself by including $(Build.BuildNumber) as part of my artifact path. Unfortunately, you cannot use a date variable directly in that path, but since I have the date built into my build number, this approach worked for me.
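The fix described above can be sketched as a YAML publish step; the staging path and the `drop_` artifact name are illustrative:

```yaml
steps:
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    # Including the build number makes each run's artifact path unique,
    # so retention cleanup of a 20-day-old build no longer deletes
    # the data behind the current artifact.
    ArtifactName: 'drop_$(Build.BuildNumber)'
```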

How to diagnose a problem with Azure DevOps build pipeline without re-running the pipeline every time you make a change?

I have an Azure DevOps pipeline build with several steps, and the build is long. Every time something is wrong with the build, we review the logs and identify issues or come up with theories. For a theory, we have to insert a diagnostic command line (such as listing a directory or showing the contents of a file) between the steps; for a fix, we add the fix and then have to wait for the whole pipeline to rerun to find out whether it worked. This makes fixing build issues very time-consuming.
If we had access to the state of the agent of an unfinished build and could log on using RDP or any other terminal to check the contents and state of the files on disk, that would save us a lot of hours.
Is there any way with Azure DevOps to do any diagnostic of this type?
No, not if you are using a hosted agent. If you are using a self-hosted agent, you can of course log in to that machine. You can, however, implement steps that run only if the build failed, and those steps can attempt to capture the information you are interested in (say, publish the state of the build directory).
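The failure-only capture step described above might look like this in YAML; the published path and artifact name are illustrative:

```yaml
steps:
# ... normal build steps ...

# Runs only when a previous step failed; publishes the working
# directory so it can be inspected without re-running the build.
- task: PublishBuildArtifacts@1
  displayName: 'Publish build directory for diagnostics'
  condition: failed()
  inputs:
    PathtoPublish: '$(Build.SourcesDirectory)'
    ArtifactName: 'failure-diagnostics'
```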
If you are using Azure DevOps Services, there is a new REST API version that lets you do a "preview" run of changes to the YAML definitions: https://learn.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-165-update#preview-fully-parsed-yaml-document-without-committing-or-running-the-pipeline

'Can not determine workspaces' error -> Azure DevOps Services and Build Agent on VM

I'm trying to set up a build server machine with a build agent configured on it, targeting an Azure DevOps Services (cloud) collection, or organization as it is now called. The connection is established, but the problem I'm facing concerns workspaces.
When I try to run the build definition, the checkout step fails with a "Can not determine workspace..." error. After I run the advised command
tf workspaces /collection:<collection_url>
on the build server, I can build the given project, but when I try another project, it's the same story. I have to run the command again (a new workspace appears in the list), and then I'm able to build that project.
Can someone point me in the right direction for diagnosing this, or tell me the cause/solution if you have faced this already?
According to the description, and this thread which I assume was also posted by you, it seems the agent in Azure DevOps is the one you used in TFS, i.e. a migrated agent.
As I run the advised command
tf workspaces /collection:<collection_url>
on Build Server, I can build given project, but, when try another project, the same story.
It looks like the build definition requires a specific workspace, which you have been creating manually with that command.
What about creating a new agent in Azure DevOps, which is quite simple, and seeing if that resolves the problem?

Azure DevOps Pipelines Release Copy Files step "The process cannot access the file because it is being used by another process"

I am using Azure DevOps pipeline releases to try to deploy a Windows service on premises. Periodically, the Windows copy files step will hang, retry every 30 seconds, and output "The process cannot access the file because it is being used by another process" as it attempts to copy the build artifacts.
We've ruled out any kind of permission issue. We've tried all sorts of tools to see what might be locking these files up, and they don't tell us much.
This has happened before with some other projects I was also trying to release on premises. Sometimes I am able to just wait an hour or two and redeploy successfully (not exactly a solution I'm satisfied with), but this one particular project, a Windows service, seems to experience the issue very frequently. Almost every time I try to deploy.
Has anyone else experienced this? Or any word from Microsoft on the issue?
Thanks in advance.
I experienced this issue while trying to create and deploy a release from an existing artifact. I have a build pipeline on Azure DevOps that generates artifacts used by the release pipeline. All I did was make a commit that triggered the build pipeline, which generated a new artifact and triggered the release, and it worked fine.
It was the first time I experienced this, and I had no clue why it happened.
I'm going to do more research and share anything I find helpful.
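One workaround often used for locked-binary errors when deploying a Windows service (not mentioned in the answer above) is to stop the service on the target machine before the copy step, so its executable and DLLs are not held open. A hedged sketch; the service name, machine variables, and paths are all placeholders:

```yaml
steps:
# Stop the service on the target machine so its binaries are not locked.
- task: PowerShellOnTargetMachines@3
  displayName: 'Stop service before copying files'
  inputs:
    Machines: '$(TargetMachines)'
    UserName: '$(AdminUser)'
    UserPassword: '$(AdminPassword)'
    ScriptType: 'Inline'
    InlineScript: |
      Stop-Service -Name 'MyWindowsService' -ErrorAction SilentlyContinue

- task: WindowsMachineFileCopy@2
  displayName: 'Copy build artifacts'
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/drop'
    MachineNames: '$(TargetMachines)'
    AdminUserName: '$(AdminUser)'
    AdminPassword: '$(AdminPassword)'
    TargetPath: 'C:\Services\MyWindowsService'

# Bring the service back up once the copy completes.
- task: PowerShellOnTargetMachines@3
  displayName: 'Start service after copy'
  inputs:
    Machines: '$(TargetMachines)'
    UserName: '$(AdminUser)'
    UserPassword: '$(AdminPassword)'
    ScriptType: 'Inline'
    InlineScript: |
      Start-Service -Name 'MyWindowsService'
```

If something other than the service itself holds the lock (an antivirus scan, a stuck handle), this won't help, but it rules out the most common culprit.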

Azure DevOps build pipeline unreliable triggering by schedule

I run build pipelines in Azure DevOps to update a Dockerfile daily and rebuild a container image with updated dependencies. The purpose is to have an up-to-date version of a dependency for the project and to release a new artifact to the container registry.
In Azure DevOps I have three chained build pipelines. The first pipeline is triggered every day by a scheduled trigger; the next two are triggered by CI triggers with file path filters. This all works well, most of the time.
My problem is that sometimes the schedule is not triggered at all. This happens after the pipelines have been running normally for days (anywhere from about 1 to 15 days). The checkbox "Only schedule builds if the source or pipeline has changed" is unchecked, so the absence of commits should not be the problem.
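For reference, in a YAML pipeline the equivalent of leaving that checkbox unchecked is `always: true` on the schedule; a sketch, with an illustrative cron time and branch name:

```yaml
schedules:
- cron: '0 3 * * *'        # every day at 03:00 UTC
  displayName: 'Daily dependency rebuild'
  branches:
    include:
    - main
  always: true             # run even if the source has not changed
```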
The strange thing is that when I log in to the Azure DevOps portal after this problem occurs, the scheduled event is triggered immediately, and I can see the latest daily build start running. I don't need to start it manually; it starts automatically, as if it had been scheduled for the time I logged in.
This project runs on the free tier of Azure DevOps. The project and pipelines were created when Azure DevOps was still VSTS, and the same triggering problem existed in VSTS as well. Sometimes I run out of free quota, and then I get an error that the agent cannot be started; that is not what happens when the scheduled trigger fails to fire.
What could cause the schedule to fail to trigger? Have any of you encountered this same problem? How could I debug or resolve this and get my builds running reliably? I cannot find any debug information about the trigger events, only agent logs from after the trigger has already fired. I have not yet recreated the pipelines to find out whether "rebooting" helps; that's my next step if no better answers come up.
Update 07/11/2019:
We have since updated this logic to give 1 full month of scheduled builds to continue to run without any user activity.
Nightly builds require someone to sign in daily.
From the docs:
My build didn't run. What happened?
Your Azure DevOps organization goes dormant five minutes after the last user signed out. After that, each of your build pipelines will run one more time. For example, while your organization is dormant:
A nightly build of code in your Azure DevOps organization will run only one night until someone signs in again.
CI builds of an external Git repo will stop running until someone signs in again.
https://learn.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=vsts&tabs=yaml