Desired State Configuration - PowerShell

I have two web servers, one service server, and a database server, all of them domain joined. I have set up a private VSTS build agent that builds my artifacts according to the build configuration, and my DEV, QA, and STAGING environments are set up across those servers.
My problem: I am looking for a way, using PowerShell Desired State Configuration, to do the following based on which environment the artifacts belong to (DEV, QA, or STAGING): copy the artifacts to a specific location on the two web servers and ensure the IIS website they host is configured correctly with all the required permissions; delete and recreate a particular Windows service on the service server; and run the migration activities for the corresponding database on the database server, since each environment has its own database.
Any help or suggestions would be appreciated. Thank you.

My suggestions are:
Don't use DSC for deployment (i.e. deploy applications or databases)
Use DSC for configuration (e.g. install IIS)
Install the VSTS Agent on each server in Deployment Groups mode, running as a service with local administrator privileges
Use the IIS Deploy Tasks designed for Deployment Groups
Use the PowerShell task to manage the Windows services (tip: help *-Service)
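For illustration, a minimal sketch of the configuration side in DSC, using the built-in WindowsFeature resource; the node names are placeholders for your two web servers:

    Configuration WebServerBaseline {
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        # Placeholder names; substitute your two web servers.
        Node @('WEB01', 'WEB02') {
            WindowsFeature IIS {
                Ensure = 'Present'
                Name   = 'Web-Server'
            }
            WindowsFeature AspNet45 {
                Ensure = 'Present'
                Name   = 'Web-Asp-Net45'
            }
        }
    }

    WebServerBaseline -OutputPath .\WebServerBaseline
    Start-DscConfiguration -Path .\WebServerBaseline -Wait -Verbose

And a hedged sketch of what a PowerShell task for the service delete/create step might run on the service server (the service name and binary path are made up):

    $name = 'MyAppService'  # hypothetical service name
    if (Get-Service -Name $name -ErrorAction SilentlyContinue) {
        Stop-Service -Name $name -Force
        sc.exe delete $name  # Remove-Service only exists in PowerShell 6+
    }
    New-Service -Name $name -BinaryPathName 'D:\Services\MyApp\MyApp.exe' -StartupType Automatic
    Start-Service -Name $name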

Related

Azure DevOps Pipeline as Code and Deployment Group

I'm facing the challenge of using the same resource (a VM in my company) for my entire dev environment, which means multiple apps will be deployed there.
I have:
https://dev.azure.com/mycompany/project1 with a Pipeline as Code for CI/CD & environment called D-Stage
https://dev.azure.com/mycompany/project2 with a Pipeline as Code for CI/CD & environment called D-Stage
Since I can't use a Deployment Group, each time I register the VM as a resource of each project's pipeline without changing the registration name, one registration replaces the other, and only the last registration keeps a connection to the VM.
On the other hand, if I create a new registration, I get a new Azure agent on the VM per project.
What is the right way to handle this scenario, since Deployment Groups are not supported in YAML pipelines?
If we want to add multiple resources to a VM environment and the resources refer to the same VM, we need to modify the agent name in the registration script; otherwise a resource with the same agent name will replace the previously registered resource.
By modifying the agent name (--agent $env:COMPUTERNAME) in the registration script, we can register multiple agents in a VM environment:
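For example, the relevant part of the generated registration script, with the agent name made unique per project instead of the default computer name (the "-project1" suffix is just an illustration; the PAT placeholder is left as-is):

    .\config.cmd --environment --environmentname "D-Stage" `
        --agent "$env:COMPUTERNAME-project1" --runasservice `
        --url "https://dev.azure.com/mycompany/" --projectname "project1" `
        --auth PAT --token <PAT>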

VSTS What User Account does Hosted Agent run under?

We are trying to set up a Release (continuous deployment) from our VSTS in the cloud. After the build is done, the Hosted Agent VS2017 tries to deploy the artifacts to the target server.
Firstly, it failed because our firewall blocked the target server from receiving the artifact (a .zip containing all the stuff). In fact, if I connect to the server via RDP and try to download the artifact from a browser, it's blocked.
Our security team temporarily disabled this firewall rule, and it worked (this also means the hosted agent has line of sight to the target server). Now they don't want this rule left off; they would like to know what user account tries to download/publish the artifact from the hosted agent, so they can allow the download of the .zip only for that specific user. I'm not sure if it's the same account that runs the service on the hosted agent, or Network Service (and therefore the target server's own credentials), or some other account.
How do I know what user account should be granted rights in our firewall to download anything?
Use the Windows Machine File Copy task; you can provide a username/password to use for copying the files.
However, it uses RoboCopy over SMB to copy the files. As a result, it's probably going to be safer to set up a private agent within your network that has line of sight to the target servers, rather than opening up a whole slew of ports in your firewall.
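To illustrate what that amounts to, a minimal sketch of copying over SMB under an explicit credential (the share path and account are placeholders):

    # Map the target share under the provided credential, then copy the artifacts.
    $cred = Get-Credential 'TARGET\deployuser'  # hypothetical account
    New-PSDrive -Name Deploy -PSProvider FileSystem -Root '\\target-server\wwwroot$' -Credential $cred | Out-Null
    Copy-Item -Path 'C:\agent\_work\drop\*' -Destination 'Deploy:\' -Recurse -Force
    Remove-PSDrive -Name Deploy

Note that SMB needs TCP 445 open toward the target, which is exactly the kind of hole a private agent avoids.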

VSTS Deployment to a deployment group from a UNC share

I am using visualstudio.com Team Services to build and deploy an ASP.NET website to two Azure VMs.
I have a build which on completion triggers a release to my two servers in a deployment group. When you configure a Deployment Group for Visual Studio Team Services you create an agent that by default runs as NT AUTHORITY\SYSTEM.
If I publish my build artifacts to Azure (the server option), then everything works fine and deployment succeeds to both my VMs. However, when using a file drop I get the following error:
    The artifact directory does not exist: \\MACHINE1\drop\RRStore\20170517.20.
    It can happen if the password of the account NT AUTHORITY\SYSTEM is changed
    recently and is not updated for the agent.
This is basically saying MACHINE2 cannot access \\MACHINE1\drop due to permissions. In Windows I can browse to this folder just fine, but since the agent is running as NT AUTHORITY\SYSTEM it cannot access it.
I want to use a file drop because my website is about 250 MB (although in the meantime I am using the 'publish to server' option and deploying via Team Services).
I am unclear how to grant permissions on the file drop, though, as the agent is running as SYSTEM. The machines are in a WORKGROUP, and granting permissions to 'Everyone' does not seem to work.
What is the correct way to configure access to a VSTS drop folder so that the deployment agent can access it?
A few possible options:
Set up a domain (I tried doing this, but then I need a new network interface and it sounds clunky)
Continue using Team Services to deploy the artifacts (or reduce the website size!)
Save to a storage account, but again I'm not sure how to configure that
Run the agent as a different user account
I have had similar problems when deploying with VSTS. Instead, I chose to:
Run VSTS agent on the deployment group VM as a local user with limited access.
Impersonate the account on the deployment group VM to test its access to the drop folder.
Save/cache a different credential to access the drop folder if applicable.
(So the sensitive information stays on the VM.)
The cached credentials can be a different local user account created on the drop server just for this purpose.
Grant the local user access to various parts of the file system explicitly to limit access permission of this VSTS agent service runner account.
This should work in most cases; in fact, this same approach is used in my VSTS, Jenkins, and TFS instances. It should save you from having to set up a domain to solve this problem.
This may not be the best practice, but at least it should get you started in the right direction.
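As a concrete sketch of the impersonation test and credential caching above (the account and share names are placeholders):

    # On the deployment VM, test the agent account's access to the drop:
    runas /user:VM\vstsagent "powershell.exe -NoProfile -Command Test-Path '\\MACHINE1\drop'"

    # While logged on as the agent account, cache a credential for the drop server
    # so SMB access succeeds under that account (prompts for the password):
    cmdkey /add:MACHINE1 /user:MACHINE1\dropreader /pass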

VSTS Azure File Copy task and ACL

I am using VSTS (Visual Studio Team Services, formerly known as Visual Studio Online) for continuous deployment to an Azure VM using an Azure File Copy task in my build definition.
The problem I am having is that I have an ACL set up on the Azure VM that only allows connections from my office for remote PowerShell.
With the ACL in place, the Azure File Copy task fails with an error like "WinRM cannot complete the operation. Verify that the specified computer name is valid, that the computer is accessible over the network, and that the firewall exception for the WinRM service is enabled and allows access from this computer." With the ACL removed, everything works.
To be clear, this is not a problem with WinRM configuration or firewalls or anything like that. It is specifically the ACL on the VM that is blocking the activity.
So the question is, how can I get this to work without completely removing the ACL from my VM? I don't want to open up the VM Powershell endpoint to the world, but I need to be able to have the Azure File Copy task of my build succeed.
You can have an on-premises build agent that lives within your office's network and configure things so that the build only uses that agent.
https://msdn.microsoft.com/library/vs/alm/release/getting-started/configure-agents#installing
The Azure File Copy task needs to use the WinRM HTTPS protocol, so when you enable the ACL, the hosted build agent can no longer reach WinRM on the Azure VM, which causes the Azure File Copy task to fail.
When copying the files from the blob container to the Azure VMs, the Windows Remote Management (WinRM) HTTPS protocol is used. This requires that the WinRM HTTPS service is properly set up on the VMs and a certificate is also installed on the VMs.
There isn't any easy workaround for this as far as I know. I would recommend setting up your own build agent inside your network that can access WinRM on the Azure VM.
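A quick way to check whether a given machine has line of sight to the VM's WinRM HTTPS endpoint (the host name is a placeholder):

    # Succeeds with WSMan identity info only if port 5986 is reachable over HTTPS:
    Test-WSMan -ComputerName 'myvm.cloudapp.net' -UseSSL -Port 5986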

TFS 2010 Build agent not starting

My build agents are not starting after I changed the service credentials from Network Service to a domain account. I did this because the Network Service account cannot write to my drop folder.
Each time I add Network Service to the drop folder share's permissions, it appears and then disappears.
I followed http://msdn.microsoft.com/en-us/library/bb778394.aspx, but some steps are different: I have XP and it doesn't show the Sharing tab, so I go through the Security tab.
So I guess I'm asking two questions here:
Why are the agents not starting after changing credentials?
Why is Network Service not able to write to the drop folder?
Thanks in advance
Yes, Network Service won't have permissions to write to a drop location. That's pretty standard. You need to be using a domain account.
The TFS Build Service will need to run as a domain user so it can write to the drop location.
The domain account for the build agent will need to be in the TFS Project Collection group for build service accounts (internal to TFS). I can't remember what it's actually called but you need to be a collection administrator to update it.
The domain account will also need the 'log on as a batch job'/'log on as a service' permissions, but those should be granted automatically when you reconfigure the service. Did you use the TFS Admin Console to reconfigure the agent, or did you just set the credentials on the service? (You should use the TFS Admin Console.)
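For the drop folder itself, the domain account needs both share and NTFS write access. A hedged sketch on a modern Windows file server (the account name and path are placeholders):

    # Share permission: CHANGE is enough to write build drops.
    net share drop=C:\drops /grant:DOMAIN\tfsbuild,CHANGE

    # NTFS permission: Modify, inherited by new files and subfolders.
    icacls C:\drops /grant "DOMAIN\tfsbuild:(OI)(CI)M"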