How to retrieve certificates in a VSTS build if the agent is running as "Network Service" - azure-devops

In the past, we used VSTS build agents running with domain accounts on on-prem build machines. In such a scenario, certificates could be stored in the domain account's personal store (manually, by logging in once with this account), so a later build could retrieve the certificates by thumbprint for signing, e.g., a manifest.
Now the agents run as "Network Service", because we no longer have a local domain (everything has moved to Azure AD). Everything works except the retrieval of certificates from the store. I already used the MMC Certificates snap-in to connect to the service (VSTSAgent) and installed the certificates into its personal store, but the build still fails with "Error MSB3323: Unable to find manifest signing certificate in the certificate store.".
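A quick sanity check, as a minimal sketch: running the following as an inline PowerShell step in the failing build shows which identity the build actually runs under and which personal certificates that account can see.

    # Minimal diagnostic sketch (hypothetical pipeline step): shows the identity the build
    # runs under and which personal certificates that account can actually see.
    Write-Output "Running as: $([Security.Principal.WindowsIdentity]::GetCurrent().Name)"
    Get-ChildItem Cert:\CurrentUser\My | Format-Table Thumbprint, Subject, NotAfter -AutoSize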
If I log on to the machine and build from within Visual Studio, everything works, but of course I am then using a different account (with a different personal store); this at least tells me that the solution and projects are fine. The pipelines are fine as well, because they still work on the "old" build machines that use a domain account.
So if anyone has an idea, or can point me to some information on how to use the VSTS agent running as "Network Service" together with signing (from the certificate store), that would be highly appreciated.
Many thanks, Sebastian

Related

Getting started with Vault for existing non-containerized Windows apps

We have a bunch of Windows server applications that currently handle secrets as follows (our apps are in C#):
We store them in settings files in code
We store them encrypted, using a certificate
The servers have this certificate with the private key, so they can decrypt the secret
We're looking at implementing Hashicorp Vault. It seems easy enough to simply replace the encrypt-store-decrypt with storing the secret in Vault in the KV engine, and just grabbing it in our apps - that takes that certificate out of the picture entirely. Since we're on-prem, I'll need to figure out our auth method.
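For the read side, a minimal sketch of what "just grabbing it" could look like, assuming a KV v2 engine mounted at secret/, a hypothetical path myapp/config, and a token already obtained from whatever auth method ends up being chosen:

    # Minimal sketch: read a secret from Vault's KV v2 engine over HTTP.
    # Assumes a KV v2 mount named "secret", a hypothetical path "myapp/config",
    # and that VAULT_ADDR / VAULT_TOKEN were already set by the chosen auth method.
    $headers  = @{ "X-Vault-Token" = $env:VAULT_TOKEN }
    $response = Invoke-RestMethod -Uri "$($env:VAULT_ADDR)/v1/secret/data/myapp/config" -Headers $headers
    # KV v2 nests the key/value payload under data.data
    $connectionString = $response.data.data.connectionString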
We have different apps running on different machines, and it's somewhat dynamic (not as much as an autoscaling scenario, but not permanent - so we can't just assign servers to roles one time and depend on Kerberos auth).
I'm unsure how to make AppRole work in our scenario. We don't have one of the example "trusted platforms" or "trusted entities", there's no Nomad, Chef, Terraform, etc. We have Windows machines, in a domain, and we have a homegrown orchestrator that could be queried to say "This machine name runs these apps", so maybe there's something that can be done there?
Am I in "write your own auth plugin" territory, to speak to our homegrown orchestrator?
Edit - someone on Reddit suggested that this is a simple solution if our apps are all 1-to-1 with the Windows domain account they run under, because then we can just use kerb authentication. That's not currently the way we're architected, but we've got to solve this somehow, and that might do it nicely.
2nd edit - replaced "services" with "apps", since most of our services aren't actually running as Windows services, just processes. The launcher is a Windows service but the individual processes it launches are not.
How about Group Managed Service Accounts?
https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
Essentially you create one "trusted platform" (toward your key vault service).
Your service can still have its own identity but delegate to the gMSA when it needs to retrieve the secrets.
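As a rough sketch of what provisioning such an account could look like (assuming the ActiveDirectory PowerShell module, an existing KDS root key, and hypothetical names):

    # Sketch: provision a group Managed Service Account for the secret-retrieval role.
    # Assumes the ActiveDirectory module, an existing KDS root key, and hypothetical names.
    New-ADServiceAccount -Name "svcVaultRead" `
        -DNSHostName "svcVaultRead.contoso.local" `
        -PrincipalsAllowedToRetrieveManagedPassword "AppServers"   # AD group containing the app machines

    # On each machine that should run under (or delegate to) the gMSA:
    Install-ADServiceAccount -Identity "svcVaultRead"
    Test-ADServiceAccount -Identity "svcVaultRead"   # returns True once the machine can use it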
For future visibility, here's what we landed on:
TLS certificate authentication. Using Vault, we issue a handful of certs, each corresponding to a security policy/profile, so that any machine that holds a given certificate can authenticate and retrieve the secrets it should have access to.
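For reference, a rough sketch of the cert auth setup (Vault CLI called from PowerShell; names, policies and file paths are placeholders):

    # Sketch: enable the TLS certificate auth method and tie a cert to a policy.
    # Names, policies and file paths below are placeholders.
    vault auth enable cert
    vault write auth/cert/certs/app-profile `
        "display_name=app-profile" `
        "policies=app-profile" `
        "certificate=@app-profile-ca.pem"

    # On a machine that holds the matching certificate and private key:
    vault login -method=cert `
        -client-cert="C:\certs\app-profile.pem" `
        -client-key="C:\certs\app-profile-key.pem" `
        "name=app-profile"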
Kerberos ended up being a dead-end for two reasons. The vault.exe agent (which is part of this use case) can't use the native Windows Kerberos SSPI, so we'd have to manage and distribute keytab files. Also, if we used machine authentication, it would blow up our client count (we're using the cloud-hosted HCP Vault, where pricing is partially based on client count).
Custom plugins can't be loaded into HCP Vault, of course.
Azure auth won't work either; it requires Managed Identities, which you can't assign to on-prem machines. Otherwise this might have been a great fit.

VSTS Deployment to a deployment group from a UNC share

I am using visualstudio.com Team Services to build and deploy an ASP.NET website to two Azure VMs.
I have a build which on completion triggers a release to my two servers in a deployment group. When you configure a Deployment Group for Visual Studio Team Services you create an agent that by default runs as NT AUTHORITY\SYSTEM.
If I publish my build artifacts to Azure (the server option) then everything works fine and deployment succeeds to both my VMs. However, when using a file drop I get the following error:
The artifact directory does not exist:
\\MACHINE1\drop\RRStore\20170517.20. It can happen if the password of
the account NT AUTHORITY\SYSTEM is changed recently and is not updated
for the agent.
This is basically saying that MACHINE2 cannot access \\MACHINE1\drop due to permissions. In Windows I can browse to this folder just fine, but since the agent is running as NT AUTHORITY\SYSTEM it cannot access it.
I want to use a file drop because my website is about 250 MB (although in the meantime I am using the 'publish to server' option and deploying via Team Services).
I am unclear how to grant permissions on the file drop, though, as the agent is running as SYSTEM. The machines are in a WORKGROUP, and granting permissions to 'Everyone' does not seem to work.
What is the correct way to configure access to a VSTS drop folder so that the deployment agent can access it?
A few possible options:
Set up a domain (I tried doing this but then I need a new network interface and it sounds clunky)
Continue using Team Services to deploy the artifacts (or reduce the website size!)
Save to a storage account, but again I'm not sure how to configure that.
Run as a different user account
I had similar problems when deploying with VSTS. Instead, I chose to:
Run the VSTS agent on the deployment group VM as a local user with limited access.
Impersonate that account on the deployment group VM to test its access to the drop folder.
Save/cache a different credential for accessing the drop folder if applicable (a sketch follows at the end of this answer),
(so the sensitive information stays on the VM).
The cached credential can be a different local user account created on the drop server just for this purpose.
Grant that local user explicit access only to the parts of the file system it needs, to limit the access permissions of this VSTS agent service account.
This should work in most cases; in fact, I use the same approach in my VSTS, Jenkins and TFS instances. It should save you from having to set up a domain just to solve this problem.
This may not be the best practice, but at least it should get you started in the right direction.
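A rough sketch of that setup (hypothetical machine, account and share names; the first part runs on the drop server, the second on the deployment group VM, in the context the agent runs under):

    # --- On the drop server (MACHINE1): dedicated local account with read access to the drop ---
    net user DropReader "Str0ngP@ssw0rd!" /add
    icacls "D:\drop" /grant "DropReader:(OI)(CI)R"                      # NTFS read on the drop folder
    Grant-SmbShareAccess -Name "drop" -AccountName "DropReader" -AccessRight Read -Force

    # --- On the deployment group VM: cache that credential for the account the agent runs as ---
    cmdkey /add:MACHINE1 /user:MACHINE1\DropReader /pass:"Str0ngP@ssw0rd!"
    Test-Path "\\MACHINE1\drop"                                         # quick access check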

TFS 'Powershell on Target Machines' task for machines in different AD domain

We want to utilize TFS release management for our deployments. We have several environments (dev, qa, staging, prod), each of them in a separate AD forest; the build machine also resides in a separate forest, and there is no trust between them.
I set up the target machines to accept CredSSP authentication for PS remoting, and I was able to enter a PS session on a target machine from the build machine. But no luck from the TFS task 'PowerShell on Target Machines'.
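For reference, a minimal sketch of that CredSSP setup and the manual test (machine and account names are placeholders):

    # Sketch of the CredSSP setup used for the manual test (placeholder names).
    # On each target machine: accept inbound CredSSP.
    Enable-WSManCredSSP -Role Server -Force

    # On the build machine: allow delegating fresh credentials to the targets.
    Enable-WSManCredSSP -Role Client -DelegateComputer "app.dev.local" -Force

    # Manual test from the build machine (this is the part that works):
    $cred = Get-Credential "dev\deployuser"
    Invoke-Command -ComputerName "app.dev.local" -Authentication Credssp -Credential $cred `
        -ScriptBlock { hostname }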
Here is how my task looks in TFS:
[screenshot: 'PowerShell on Target Machines' task configuration]
In logs:
2016-12-30T15:04:11.0279893Z System.Management.Automation.Remoting.PSRemotingTransportException: Connecting to remote server app.dev.local failed with the following error message : WinRM cannot process the request. The following error with errorcode 0x80090322 occurred while using Negotiate authentication: An unknown security error occurred.
Is there any way to make TFS run PowerShell on target machines that reside outside of the build machine's AD domain?
AD trust doesn't look like an option. And without proper PS remoting it doesn't look like release management can provide much value for us.
TL;DR:
No, you have two options:
Set up a one-way trust between your primary domain and all of your sub-domains, so that your production domain credentials can be used on all of them.
Use shadow accounts to allow cross-domain authentication. These are local accounts with the same username and password across machines, which allows authentication. This is the official MSFT workaround for authentication between non-trusting domains.
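As a sketch, setting up a shadow account is just creating the same local user and password on every machine involved (hypothetical names; assumes the LocalAccounts module on Windows Server 2016+/Windows 10):

    # Sketch: a "shadow account" is a matching local user (same name and password)
    # created on the build machine and on every target machine (hypothetical names).
    $password = Read-Host -AsSecureString "Password for the shadow account"
    New-LocalUser -Name "tfsdeploy" -Password $password -PasswordNeverExpires
    Add-LocalGroupMember -Group "Administrators" -Member "tfsdeploy"   # or a less privileged group
    # Repeat with the same password on each machine, then use MACHINENAME\tfsdeploy
    # (or .\tfsdeploy) as the credential in the 'PowerShell on Target Machines' task.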
The long answer
Other than that, since you are well off the supported happy path, you would need to implement your own custom tasks that facilitate the cross-domain authentication you want. It should be fairly simple to implement your own tasks in PowerShell.
https://www.visualstudio.com/en-us/docs/integrate/extensions/develop/add-build-task
The reality is that there are only a few limited scenarios where you need a "test AD" environment, and it is never correct to have separate domains for Dev, QA, or Staging. AD is not designed that way, and I have never seen it work for the benefit of the organisation or the development effort. It is a product of over-paranoid sysadmins and it is a lost cause.
The only reason to have a permanent additional domain is for your sysadmins to test their domain changes and configurations.
For software development projects that actively change AD, or require specific setups for testing, you would dynamically create your test domain along with the test machines required. That is how you create valid and repeatable tests against a Domain.

BizTalk AS/2 implementation certificates

I cannot add any certificates to AS2 messages in BizTalk.
So here's what I have for the moment: I have installed 2 certificates on the BizTalk machine, using the same account as the one under which the Host Instance is running.
The 2 certificates are the following, placed in these locations:
\Personal\Certificates - My own certificate 'pfx'.
\Other People\Certificates - Party certificate 'cer'.
That covers importing the certificates.
Now, in the BizTalk Administration Console, I go to Parties and open the agreement between the parties. In that window I go down to 'Signature certificate' and check "Override group signing certificate". Then, when I click "Browse", I see:
"No certificate available."
"No certificates meet the application criteria".
Any idea on what's wrong here?
I've found it. The certificates should be installed under the same account that the BizTalk Administration Console is opened under; otherwise the certificates cannot be found.
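A quick way to confirm which certificates the account you opened the console under can actually see (a sketch; the "Other People" store maps to AddressBook in the PowerShell cert drive):

    # Sketch: list the certificates visible to the account the console is running under.
    Get-ChildItem Cert:\CurrentUser\My | Format-Table Thumbprint, Subject           # Personal
    Get-ChildItem Cert:\CurrentUser\AddressBook | Format-Table Thumbprint, Subject  # Other People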

How do you install a certificate from a PFX file into the personal store of NT AUTHORITY\NetworkService?

I have a .PFX file used to strong-name several of our .NET assemblies. VS2010/MSBuild seems to expect this to be in the personal store of the user account running VS2010/MSBuild. This is all just fine and dandy when working in an interactive user account, but when attempting an automated build via TFS 2010 on the build agent, the account used by the build agent (by default) is NT AUTHORITY\NetworkService.
Since I cannot log in to an interactive session as NetworkService, I can't just install the PFX from an interactive session's shell.
So can anyone tell me how I install a PFX certificate in the personal cert store of the NetworkService account?
Answer courtesy of Richard, reposted from Server Fault.
You need to open the Network Service certificate store, and add it.
To open the store:
From Start | Run: mmc.exe
File | Add/Remove Snap-ins, select Certificates, then Add.
When prompted for the type of account, select Service account.
Select local/remote computer as required
Select any service that's running as Network Service
("Remote Procedure Call (RPC)" run as Network Service by default)
Finish the wizard and OK to close the add/remove dialog.
On the applicable category, right-click and select All Tasks to find the Import etc. operations.
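If you prefer a scripted route, a commonly cited alternative (a sketch only, assuming Sysinternals PsExec is available and you run from an elevated prompt; the file path and password are placeholders) is to open a shell as Network Service and import the PFX there:

    # Sketch: scripted alternative to the MMC steps above (assumes Sysinternals PsExec, elevated prompt).
    # 1. Start PowerShell running as Network Service (press Enter if prompted for a password):
    psexec -i -u "nt authority\network service" powershell.exe

    # 2. Inside that shell, import the PFX into that account's personal (My) store:
    certutil -user -p "PfxPassword" -importPFX C:\temp\signing.pfx

    # 3. Still inside that shell, confirm the certificate is visible by thumbprint:
    Get-ChildItem Cert:\CurrentUser\My | Format-Table Thumbprint, Subject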