We are trying to set up a Release (continuous deployment) from our VSTS instance in the cloud. After the build is done, the Hosted Agent VS2017 tries to deploy the artifacts to the target server.
Firstly, it failed because our firewall blocked the target server from receiving the artifact (a .zip containing all the stuff). In fact, if I connect to the server via RDP and try to download the artifact from a browser, it's blocked.
Our security team temporarily disabled this firewall rule, and it worked (which also means the hosted agent has line of sight to the target server). Now they don't want to leave this rule off; they would like to know which user account tries to download/publish the artifact from the hosted agent, so they can allow the download of the .zip only for that specific user. I'm not sure if it's the same account that runs the service on the hosted agent, or Network Service (i.e. the target server's own credentials), or some other account.
How do I know what user account should be granted rights in our firewall to download anything?
Use the Windows Machine File Copy task; it lets you provide a username/password to use for copying the files.
However, it uses RoboCopy over SMB to copy the files. As a result, it's probably going to be safer to set up a private agent within your network that has line of sight to the target servers, rather than opening up a whole slew of ports in your firewall.
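For reference, a minimal sketch of how that task is wired up in YAML pipeline syntax; the machine name, paths, and variables below are placeholders, not values from your setup:

- task: WindowsMachineFileCopy@2
  inputs:
    SourcePath: '$(System.DefaultWorkingDirectory)/drop'
    MachineNames: 'targetserver.example.com'   # target behind your firewall
    AdminUserName: '$(deployUser)'             # the account the SMB copy runs as
    AdminPassword: '$(deployPassword)'         # store this as a secret variable
    TargetPath: 'C:\deploy\site'

Whatever account you supply here is the one performing the copy, which is the account your security team would need to allow.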
Deploy issue with Azure Release Pipeline
I am having trouble deploying files to my servers through the Release Pipelines.
I need to copy files to a Windows and a Linux server. I have tried using the file copy and the SSH file copy tasks, but they seem to be getting blocked because the Microsoft servers aren't in my firewall whitelist. What is worse, I can't seem to get a reliable list of IPs that I need to whitelist, and even if I did, they seem to change over time.
So, any advice appreciated.
Also, I am a bit confused about the Azure agent. My understanding was that you install it on the servers so that you don't need to worry about firewall issues. I just have the feeling I am missing something. I have no idea what that agent is doing at the moment; it certainly doesn't seem to be helping with the file deploy.
Thanks in advance!
Self-hosted agent: an agent that you set up and manage on your own to run jobs.
To resolve this issue, you can create your own private agent, then add the IP address of the machine where the private agent is deployed to the firewall whitelist of your server machines.
In this case, the Azure Release Pipeline runs on your private agent, and because the IP of the machine hosting the private agent is whitelisted, it will not be blocked by the firewalls of the Windows and Linux servers.
You can refer to the document Self-hosted agents to create your private agent.
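Setting one up boils down to unpacking the agent package on a machine inside your network and running its config script. A rough sketch; the URL, PAT, pool, and agent names below are placeholders:

# From the unpacked agent folder, in PowerShell
.\config.cmd --url https://yourorg.visualstudio.com --auth pat --token YOUR_PAT --pool Default --agent MyPrivateAgent
# Run interactively; alternatively pass --runAsService during config
.\run.cmd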
I am using visualstudio.com Team Services to build and deploy an ASP.NET website to two Azure VMs.
I have a build which, on completion, triggers a release to my two servers in a deployment group. When you configure a Deployment Group for Visual Studio Team Services, you create an agent that by default runs as NT AUTHORITY\SYSTEM.
If I publish my build artifacts to Azure (the server option) then everything works fine and deployment succeeds to both my VMs. However, when using a file drop I get the following error:
The artifact directory does not exist: \\MACHINE1\drop\RRStore\20170517.20. It can happen if the password of the account NT AUTHORITY\SYSTEM is changed recently and is not updated for the agent.
This is basically saying MACHINE2 cannot access \\MACHINE1\drop due to permissions. In Windows I can bring up this folder just fine, but since the agent is running as NT AUTHORITY\SYSTEM it cannot access it.
I want to use a file drop because my website is about 250 MB (although in the meantime I am using the 'publish to server' option and deploying via Team Services).
I am unclear how to give permissions to the file drop, though, as the agent is running as SYSTEM. The machines are in a WORKGROUP, and giving permissions to 'Everyone' does not seem to work.
What is the correct way to configure access to a VSTS drop folder so that the deployment agent can access it?
A few possible options:
Set up a domain (I tried doing this, but then I need a new network interface and it sounds clunky)
Continue using Team Services to deploy the artifacts (or reduce the website size!)
Save to a storage account, but again I'm not sure how to configure that (a rough sketch of the upload side follows after this list).
Run as a different user account
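For reference, the upload half of the storage-account option might look something like this, using the classic Azure.Storage PowerShell cmdlets; the account name, key, and container are invented:

$storageKey = 'YOUR_STORAGE_KEY'   # placeholder, keep it out of source control
$ctx = New-AzureStorageContext -StorageAccountName 'mybuilddrop' -StorageAccountKey $storageKey
# Upload the zipped drop as a blob; the release would then download it on the VM
Set-AzureStorageBlobContent -File 'drop.zip' -Container 'artifacts' -Context $ctx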
I had similar problems when deploying with VSTS. Instead, I chose to:
Run VSTS agent on the deployment group VM as a local user with limited access.
Impersonate the account on the deployment group VM to test its access to the drop folder.
Save/cache a different credential to access the drop folder if applicable.
(So the sensitive information stays on the VM.)
The cached credentials can be a different local user account created on the drop server just for this purpose.
Grant the local user explicit access to only the necessary parts of the file system, to limit the access permissions of this VSTS agent service account (see the command sketch after this answer).
This should work in most cases. In fact, I use this same approach in my VSTS, Jenkins, and TFS instances. It should save you from having to set up a domain to solve this problem.
This may not be the best practice, but at least it should get you started in the right direction.
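A rough sketch of the commands involved; the user name, password, folder, and machine names are invented for illustration:

# On the drop server: create a limited local user and grant read access
net user vstsdrop 'Str0ngPassw0rd!' /add          # name/password are examples
icacls 'C:\drop' /grant 'vstsdrop:(OI)(CI)R'
# On each deployment group VM: cache that credential for the drop share
cmdkey /add:MACHINE1 /user:MACHINE1\vstsdrop /pass:'Str0ngPassw0rd!'

Because the credential is cached per machine with cmdkey, the sensitive information never leaves the VM, which is the point of step 3 above.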
I am using VSTS (Visual Studio Team Services, formerly known as Visual Studio Online) for continuous deployment to an Azure VM, using an Azure File Copy task in my build definition.
The problem I am having is that I have an ACL set up on the Azure VM that only allows connections from my office for Remote PowerShell.
With the ACL in place, the Azure File Copy task fails with an error like "WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that the firewall exception for the WinRM service is enabled and allows access from this computer." With the ACL removed, everything works.
To be clear, this is not a problem with WinRM configuration or firewalls or anything like that. It is specifically the ACL on the VM that is blocking the activity.
So the question is, how can I get this to work without completely removing the ACL from my VM? I don't want to open up the VM's PowerShell endpoint to the world, but I need the Azure File Copy task of my build to succeed.
You can have an on-premises build agent that lives within your office's network and configure things so that the build only uses that agent.
https://msdn.microsoft.com/library/vs/alm/release/getting-started/configure-agents#installing
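In YAML pipeline terms, pinning the build to that agent might look roughly like this; the pool and agent names are placeholders:

pool:
  name: OfficeAgents                        # the pool your on-premises agent joined
  demands:
  - Agent.Name -equals MyOfficeBuildAgent   # optionally pin one specific agent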
The Azure File Copy task needs to use the WinRM HTTPS protocol, so when you enable the ACL, the hosted build agent can't reach WinRM on the Azure VM, and that causes the Azure File Copy task to fail.
When copying the files from the blob container to the Azure VMs, Windows Remote Management (WinRM) HTTPS protocol is used. This requires that the WinRM HTTPS service is properly setup on the VMs and a certificate is also installed on the VMs.
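To confirm it's the WinRM path that's blocked rather than anything else, you can check the listener on the VM and test reachability from the agent side; the host name below is a placeholder:

# On the Azure VM: list WinRM listeners (there should be an HTTPS one on 5986)
winrm enumerate winrm/config/listener
# From the agent side: test whether the HTTPS endpoint is reachable
Test-WSMan -ComputerName yourvm.cloudapp.net -UseSSL -Port 5986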
There isn't any easy workaround for this as far as I know. I would recommend setting up your own build agent in your network that can access WinRM on the Azure VM.
I've checked that the TeamCity user has access to the network share in question.
All packages from the public NuGet feed are found correctly, while packages available on the network share are not.
We use the network share when building via Visual Studio with the exact same path without a problem.
I've tried using "file://ratchet/NuGetRepository" but that doesn't make a difference.
The relevant TeamCity log entry is shown below:
NuGet command: E:\BuildAgent01\plugins\nuget-agent\bin\JetBrains.TeamCity.NuGetRunner.exe E:\BuildAgent01\tools\NuGet.CommandLine.DEFAULT.nupkg\tools\NuGet.exe restore E:\BuildAgent01\work\95323b7041b60513\MySolution.sln -Source https://nuget.org/api/v2/ -Source \\ratchet\NuGetRepository\
I was able to solve this by specifying the fully qualified name of the network share, e.g. \\ratchet.hq.local\NuGetRepository.
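For example, in a NuGet.config next to the solution (the key name is arbitrary):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="InternalShare" value="\\ratchet.hq.local\NuGetRepository" />
  </packageSources>
</configuration>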
Since the accepted answer did not provide a solution for my setup, I'd like to post what did allow TeamCity to access my network share.
First, a very important note: the TeamCity Build Agent may run either as a Windows service or directly in a command prompt. For my machine, this had the following consequences:
When run as a Windows service, the build agent was logged in as LocalSystem. For our network share, my machine's credentials were not given permissions.
Note: while this SO thread indicates that the network share can be configured to allow the machine's LocalSystem account to have permission, this was NOT an option for me.
When run in command prompt, the build agent will use the security context of whoever runs it (for me, it was my domain user). Again, for our network share, all domain users are given permissions.
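A quick way to check what a given account can actually see on the share (the domain and user names are examples; the share is the one from this question):

# List the share under a specific user's security context
runas /user:HQ\builduser "cmd /c dir \\ratchet\NuGetRepository"
# And confirm which identity the current session is running as
whoami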
The quick solution was to simply run the build agent in command prompt and call it a day; however, I did really want to run the build agent as a Windows service, since I think it is a cleaner approach.
Here's my solution:
First, I needed to grant my domain user the privilege to log on as a service. This is needed to run the service with my domain user's security context. I navigated to User Rights Assignment within Local Security Policy:
Control Panel -> Administrative Tools -> Local Security Policy -> Local Policies -> User Rights Assignment
Next, I added my domain user to the Log on as a service setting. For this, I made sure to include the domain with my user name.
Now that my domain user's security context could be used when starting a service, I navigated to Services (services.msc), located the TeamCity Build Agent service, and edited its properties to use my domain account on the Log On tab.
Now, when relaunching the TeamCity Build Agent Windows service, it is able to access the network share since it uses the security context of my domain user. I can now access the NuGet repository on our shared drive and keep the build agent running in the background.
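If you prefer to script that service change rather than click through services.msc, something like the following should work; the service name is what the TeamCity installer typically registers, so verify it on your machine first, and the account/password are examples:

# Reconfigure the agent service to log on as the domain account, then restart it
sc.exe config TCBuildAgent obj= 'HQ\builduser' password= 'Str0ngPassw0rd!'
sc.exe stop TCBuildAgent
sc.exe start TCBuildAgent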
You can include the package sources in the NuGet.targets file. Just find the commented lines and add your path:
<PackageSource Include="https://nuget.org/api/v2/" />
<PackageSource Include="\\ratchet\NuGetRepository\" />
My build agents are not starting after I changed the service credentials from Network Service to a domain account. I did this because the Network Service account cannot write to my drop folder.
Each time I add Network Service to the drop folder share's permissions, it appears and then disappears.
I followed http://msdn.microsoft.com/en-us/library/bb778394.aspx, but some steps are different: I have XP and it doesn't show the Share tab, so I went through the Security tab instead.
So I guess I'm asking two questions here:
Why are the agents not starting after changing credentials?
Why is Network Service not able to write to the drop folder?
Thanks in advance
Yes, Network Service won't have permissions to write to a drop location. That's pretty standard. You need to be using a domain account.
The TFS Build Service will need to run as a domain user so it can write to the drop location.
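Granting that domain account rights on the drop share could look roughly like this; the share name, path, and account are examples, and Grant-SmbShareAccess requires Windows Server 2012 or later (on older systems, use the share's permissions dialog instead):

# Share-level permission for the build service's domain account
Grant-SmbShareAccess -Name 'drop' -AccountName 'MYDOMAIN\tfsbuild' -AccessRight Change -Force
# Matching NTFS permission on the folder behind the share
icacls 'D:\drop' /grant 'MYDOMAIN\tfsbuild:(OI)(CI)M'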
The domain account for the build agent will need to be in the TFS Project Collection group for build service accounts (internal to TFS). I can't remember what it's actually called but you need to be a collection administrator to update it.
The domain account will also need some log-on-as-batch/service permissions, but those should be granted automatically when you reconfigure the service. Did you use the TFS Admin console to reconfigure the agent, or did you just set the credentials on the service? (You should use the TFS Admin console.)