Can we copy files from $(System.DefaultWorkingDirectory) to an Azure IaaS server using agent pool "Hosted VS2017"? - azure-devops

We are using a CI/CD pipeline in OneITVSO. Earlier we had an agent pool that was created internally; now we have been asked to use "Hosted VS 2017". We have a Database solution, an ETL solution and a Tabular Model solution that need to be deployed. Additionally we have certain scope scripts.
We are able to build the solutions using "Hosted VS 2017", but we are not able to deploy with it. In the release pipeline we have a "Windows Machine File Copy" task that copies artifacts (dacpac/ispac/.sql files) from the build server to the dev/UAT servers.
With the earlier agent pool this pipeline deployed successfully, but with "Hosted VS 2017" we get the error below:
Failed to connect to the path \\AZDEVSERVERSQL01 with the user domain\servicecredentialdwd for copying. System error 53 has occurred.
1) Can "Hosted VS 2017" be used for a task like "Windows Machine File Copy"? (We are using a Microsoft Azure virtual machine (IaaS).)
2) If "Hosted VS 2017" can be used even for IaaS Azure machines, are we missing any credential access? Should we give domain\servicecredentialdwd any access for the agent pool "Hosted VS 2017"? If so, what permissions have to be given, and how?
NOTE: The same pipeline deploys successfully when the private agent is used and fails when "Hosted VS 2017" is used.

If your IaaS server has a public IP configured, then yes; if not, then no. The hosted build agent has to be able to establish a network route to your virtual machine. If the VM is isolated in a private network, the hosted agent can't send traffic to it.
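System error 53 means "The network path was not found", which fits this: the hosted agent cannot resolve or reach the server. As a quick check you can run an inline PowerShell task on the hosted agent before the copy step. A minimal sketch, assuming the SMB-based copy that "Windows Machine File Copy" performs; the host name is a placeholder for your VM's public DNS name or IP:

# Hypothetical host name; substitute the public address of your IaaS VM.
# Port 445 is checked because the copy task reaches the machine over SMB.
Test-NetConnection -ComputerName 'AZDEVSERVERSQL01.example.com' -Port 445

If TcpTestSucceeded comes back false, the hosted agent has no route to the machine and the copy task cannot work, regardless of credentials.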

Related

DevOps Release Pipeline on-premise TaskModuleSqlUtility

I am trying to use DevOps Release Pipeline to release SQL code onto an on-premise SQL Server.
I am able to do this successfully by adding the DevOps agent to one of my servers. But when I try to deploy to an on-premise SQL Server that uses a linked server to a Sybase database (nothing to do with the database I'm deploying to; the linked server is used by a different database on the same server), I get the error below in DevOps during the release.
The server I am trying to deploy to has a 'LIB' environment variable in Windows that references "D:\Sybase%SYBASE_OCS%\lib", as the Sybase DLLs are used on the machine.
Release Error:
2021-02-19T12:48:29.0053647Z ##[error]Warning as Error: Invalid search path 'D:\Sybase%SYBASE_OCS%\lib' specified in 'LIB environment variable' -- 'The system cannot find the path specified.'
(1) : namespace TaskModuleSqlUtility
Is there something I can add to the build or change on the server to get it to deploy?
It seems that you are using a self-hosted agent, so the environment variables of the local machine need to be configured.
Check the environment variable and ensure that the path D:\Sybase%SYBASE_OCS%\lib exists once %SYBASE_OCS% is expanded.
Steps: open the environment variables dialog -> fix the LIB variable in both the user and the system variable lists.
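A quick way to verify this on the server is a short PowerShell check. This is a sketch using the names from the question; it lists each entry on LIB, expands any embedded %...% references, and tests whether the resulting path exists:

# Show the raw LIB value in both scopes:
[Environment]::GetEnvironmentVariable('LIB', 'Machine')
[Environment]::GetEnvironmentVariable('LIB', 'User')

# Expand %SYBASE_OCS% (and anything else) in each entry and test it:
($env:LIB -split ';') | Where-Object { $_ } | ForEach-Object {
    $expanded = [Environment]::ExpandEnvironmentVariables($_)
    [pscustomobject]@{ Path = $expanded; Exists = Test-Path $expanded }
}

Any entry with Exists = False is a candidate for the "Invalid search path" error; if %SYBASE_OCS% shows up unexpanded, that variable is not defined in the scope the agent runs under.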

Agent version 2.173.0 fails to connect to Azure DevOps

Agent Version and Platform
2.173.0 on centos-release-7-6.1810.2.el7.centos.x86_64
It's a release agent for a deployment pool.
Azure DevOps Type and Version
dev.azure.com (cloud)
What's not working?
# Running run once with agent version 2.160.1
./run.sh --once
Scanning for tool capabilities.
Connecting to the server.
2020-08-25 21:31:02Z: Listening for Jobs
Agent update in progress, do not shutdown agent.
Downloading 2.173.0 agent
Waiting for current job finish running.
Generate and execute update script.
Agent will exit shortly for update, should back online within 10 seconds.
‘/root/azagent/_diag/SelfUpdate-20200825-213148.log’ -> ‘/root/azagent/_diag/SelfUpdate-20200825-213148.log.succeed’
Scanning for tool capabilities.
Connecting to the server.
# this now runs indefinitely
Is there a way to stop the auto update? Multiple agents on production machines are offline and I have, as of now, no idea how to fix that.
agent.log
Edit: It is a release agent in a deployment group. Also, there is a GitHub issue now: https://github.com/microsoft/azure-pipelines-agent/issues/3093
To resolve the "Authentication failed with status code 401" error from the agent log, you can try the steps below:
1. Create a new PAT with manage permission, then reconfigure the agent with the config.sh file.
2. If that does not work, try creating a new agent pool and register the agents again.
To stop the auto-update, disable the corresponding option under Organization settings => Agent pools => Settings.
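If you do reconfigure, the sequence on the CentOS machine might look like the sketch below for a PAT-based deployment group agent; everything in angle brackets is a placeholder:

# Run from the agent folder: remove the old registration, re-register, then start the agent.
./config.sh remove --auth pat --token <PAT>
./config.sh --url https://dev.azure.com/<organization> --auth pat --token <PAT> --deploymentgroup --deploymentgroupname "<group>" --projectname "<project>"
./run.sh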

Azure DevOps Agent won't start and shows: Error 1 Incorrect Function - Service could not start

I configured the build agent as a service but when I go to start the agent I get the error:
Error 1 Incorrect Function - Service could not start
Azure DevOps Agent configured as a service but service does not start
I changed my user to Local System (NT AUTHORITY\SYSTEM) in services.msc and it worked as intended.
Copied from the comments:
OK, I'll answer my own question: when the config.cmd command is run, it
allocates the NETWORK SERVICE account to run the service. However, it
does NOT automatically give that account permissions to the folder where
the agent is installed, so the service fails to run. Stupid, as this
should be flagged when running the config.cmd command! The error message
is nonsense and misleading. So if the agent is in c:\users\abc\agent you
need to give NETWORK SERVICE permission to access that folder!
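A quick way to grant that access is an elevated icacls call. A sketch, using the folder path from the comment above; adjust it to your install location:

# Grant NETWORK SERVICE modify rights on the agent folder and everything in it.
icacls "C:\users\abc\agent" /grant "NETWORK SERVICE:(OI)(CI)M" /T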
Running from C:\Agent worked perfectly for me after struggling for a few days.
This happens when the Azure Pipelines Agent service on the machine is set to log on as NETWORK SERVICE.
To resolve this:
Right-click the Azure Pipelines Agent service
Properties
Click on the Log On tab
Select Local System account
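The same change can be made from an elevated prompt. A sketch; the service name is a placeholder (agent services are typically named vstsagent.<organization>.<pool>.<agent>):

# Point the service at LocalSystem, then start it (note the space after obj=).
sc.exe config "vstsagent.myorg.Default.MYAGENT" obj= LocalSystem
sc.exe start "vstsagent.myorg.Default.MYAGENT"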

Deployment of Windows Services on Remote Servers (Different Domain)

Is there a simpler way of deploying Windows services from TFS than using a PowerShell script, run on the TFS server, which:
Stops the existing Windows service on the remote server
Copies the files to a shared folder on the remote server (Copy-Item)
Starts the Windows service on the remote server
If not, can any other continuous integration/deployment tool do this better?
As the TFS server uses a domain controller different from the remote server's, can we share a folder for a specific user? I tried to run the PowerShell script as a user from the target domain, but of course it is not recognized as a valid user on the TFS server.
Finally, is there any difference between deploying to a hosted remote server and deploying to the cloud?
Thanks,
In a task-based build system (TFS 2015+), you can try installing the Windows Service Release Tasks extension, which contains tasks to start and stop Windows services as well as change their startup type.
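If you stay with the script approach, the three steps from the question can be run with explicit credentials for the target domain, which also answers the cross-domain share question: map the share with credentials that are valid on the remote machine rather than on the TFS server. A minimal sketch; the server, share, service, and account names are placeholders, and it assumes PowerShell remoting is enabled on the target:

# Credentials valid in the remote server's domain (placeholder account).
$cred = Get-Credential 'TARGETDOMAIN\deployuser'

# 1. Stop the service remotely.
Invoke-Command -ComputerName remote01 -Credential $cred -ScriptBlock { Stop-Service -Name MyService }

# 2. Map the remote share with the same credentials and copy the binaries.
New-PSDrive -Name Deploy -PSProvider FileSystem -Root '\\remote01\ServiceShare' -Credential $cred | Out-Null
Copy-Item -Path '.\drop\*' -Destination 'Deploy:\' -Recurse -Force
Remove-PSDrive -Name Deploy

# 3. Start the service again.
Invoke-Command -ComputerName remote01 -Credential $cred -ScriptBlock { Start-Service -Name MyService }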

How do I deploy service fabric application from VSTS release pipeline?

I have configured a CI build for a Service Fabric application, in Visual Studio Team Services, according to this documentation: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration
But instead of having my CI build do the publishing, I only perform the Build and Package tasks, and include all Service Fabric related output, such as pkg folder, scripts, publish profiles and application parameters, in the drop. This way I can pass it along to the new Release pipeline (agent-based releases) to do the actual deployment of my service fabric application.
In my release definition I have a single Azure Powershell task, that uses an ARM endpoint (with proper service principals configured).
When I deploy my app to an existing service fabric cluster, I use the default Deploy-FabricApplication cmdlet passing along the pkg folder and a publish profile that is configured with a connection to the existing cluster.
The release fails with the error message "Cluster connection instance is null", and I cannot understand why.
Doing some debugging I have found that:
The Deploy-FabricApplication cmdlet executes the Connect-ServiceFabricCluster cmdlet just fine, but as soon as the Publish-NewServiceFabricApplication cmdlet takes over execution, then the cluster connection is lost.
I would expect this scenario to be possible using the Service Fabric cmdlets, but I cannot figure out how to keep the cluster connection open during deployment.
UPDATE: The link to the documentation no longer refers to the Service Fabric PowerShell scripts, so the precondition for this question is no longer documented. The article now refers to the VSTS build and release tasks, which may be preferred over the PowerShell cmdlets I tried to use.
When the Connect-ServiceFabricCluster function is called (from Deploy-FabricApplication.ps1) a local $clusterConnection variable is set after the call to Connect-ServiceFabricCluster. You can see that using Get-Variable.
Unfortunately there is logic in some of the SDK scripts that expect that variable to be set but because they run in a different scope, that local variable isn't available.
It works in Visual Studio because the Deploy-FabricApplication.ps1 script is called using dot source notation, which puts the $clusterConnection variable in the current scope.
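To illustrate the scoping difference, consider these two ways of invoking the script (arguments omitted):

# Dot sourcing runs the script in the caller's scope, so variables it sets survive:
. .\Deploy-FabricApplication.ps1

# The call operator runs it in a child scope, so $clusterConnection is gone afterwards:
& .\Deploy-FabricApplication.ps1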
I'm not sure if there is a way to use dot sourcing when running a script through the release pipeline, but you could, as a workaround, make the $clusterConnection variable global right after it's been set via the Connect-ServiceFabricCluster call. Edit your Deploy-FabricApplication.ps1 script and add the following line after the connection logic (~line 169):
$global:clusterConnection = $clusterConnection
By the way, you might want to consider setting up custom build/release tasks that deploy a Service Fabric application, rather than using the various Deploy-FabricApplication.ps1 scripts.
There now exists a built-in VSTS task for deploying a Service Fabric app so you no longer need to bother with executing the PowerShell script on your own. Task documentation page is at https://www.visualstudio.com/docs/build/steps/deploy/service-fabric-deploy. The original CI article has also been updated which provides details on how to set everything up: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/.
Try to use "PowerShell" task instead of "Azure PowerShell" task.
I hit the same bug today and opened a GitHub issue here
On a side note, the VS-generated script Deploy-FabricApplication.ps1 imports the module
"$((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Service Fabric SDK" -Name "FabricSDKPSModulePath").FabricSDKPSModulePath)\ServiceFabricSDK.psm1"
That's where Publish-NewServiceFabricApplication comes from. You can check the deployment logic and rewrite it in a saner way using lower-level Service Fabric SDK cmdlets (potentially getting the connection with Get-ServiceFabricClusterConnection instead of making it global).
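A rough sketch of that lower-level flow; the endpoint, package path, names, and version are placeholders, and the exact parameters depend on your cluster and image store configuration:

# Connect once, then reuse the connection object instead of a global variable.
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.westeurope.cloudapp.azure.com:19000'
$connection = Get-ServiceFabricClusterConnection

# Copy the package to the image store, register the type, create the application.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\pkg\Release' -ImageStoreConnectionString 'fabric:ImageStore' -ApplicationPackagePathInImageStore 'MyApp'
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyApp'
New-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' -ApplicationTypeName 'MyAppType' -ApplicationTypeVersion '1.0.0'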