How to implement just-in-time access for a deployment server? - azure-devops

BACKGROUND
We are about to set up a deployment server that will be used to manage Azure resources. The deployment server will run pre-defined PowerShell scripts and deploy ARM templates.
This article describes how to use service principals and key vaults so that the application running inside the deployment server can execute deployment scripts securely.
PROBLEM
Frequently, the deployment server will be updated with scripts, new pipelines, different types of configuration, code snippets, templates, etc. When changes are made on the deployment server, we do not want the secrets to be exposed in any way.
A JUST-IN-TIME APPROACH – CUSTOM ACCESS KEY API
The functionality we are looking for can possibly be implemented with a custom access key API with the following workflow:
1. In a service request portal, a deployment ticket is signed by an approver
2. The deployment server receives the signed deployment ticket
3. The deployment server sends the signed ticket to a custom access key API and receives a temporary service principal and access key
4. The deployment server executes scripts (with the temporary service principal)
5. The temporary service principal and access key are automatically removed
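To make steps 3 and 5 concrete, here is a minimal Az PowerShell sketch of what the access key API could do internally; the display name, role, and scope variables are assumptions for illustration, not part of the workflow above:
# Scope and role are derived from the signed, verified ticket (hypothetical names)
$scope = "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName"
# Create a short-lived service principal; Az generates a client secret for it
$sp = New-AzADServicePrincipal -DisplayName "jit-deploy-$(New-Guid)"
New-AzRoleAssignment -ApplicationId $sp.AppId -RoleDefinitionName 'Contributor' -Scope $scope
# In recent Az versions the generated secret is in $sp.PasswordCredentials.SecretText;
# hand $sp.AppId and that secret to the deployment server, then later clean up:
Remove-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName 'Contributor' -Scope $scope
Remove-AzADServicePrincipal -ObjectId $sp.Id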
WHY A CUSTOM ACCESS KEY API?
The custom access key API adds the following capabilities:
Compared to the deployment server, the API has a smaller footprint, and we expect updates to the service to be rare and done in a very controlled manner.
The API can grant the deployment server access based on the exact need (subscription, resource group, etc.)
The API can use digital signatures to verify the original approver of the ticket (see the sketch below)
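As an illustration of the signature check, a minimal PowerShell sketch over the .NET crypto APIs, assuming the ticket is a file signed with the approver's RSA certificate (all file names are hypothetical):
# Verify the approver's signature over the raw ticket bytes
$ticketBytes = [System.IO.File]::ReadAllBytes('C:\tickets\ticket.json')
$signature = [System.IO.File]::ReadAllBytes('C:\tickets\ticket.sig')
$approverCert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new('C:\tickets\approver.cer')
$rsa = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPublicKey($approverCert)
$valid = $rsa.VerifyData($ticketBytes, $signature, [System.Security.Cryptography.HashAlgorithmName]::SHA256, [System.Security.Cryptography.RSASignaturePadding]::Pkcs1)
if (-not $valid) { throw 'Invalid ticket signature; no credentials issued.' }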
RECOMMENDED APPROACH?
What is the recommended approach to implement just-in-time access for a deployment server?

Related

injected db credentials change when I deploy new app version to cloud

I deploy a web app to a local cloudfoundry environment. As a database service for my DEV environment I have chosen a Marketplace service google-cloudsql-postgres with the plan postgres-db-f1-micro. Using the Web UI I created an instance with the name myapp-test-database and mentioned it in the CF Manifest:
applications:
- name: myapp-test
  services:
  - myapp-test-database
At first, all is fine. I can even redeploy the existing artifact. However, when I build a new version of my app and push it to CF, the injected credentials are updated and the app can no longer access the tables:
PSQLException: ERROR: permission denied for table
The tables are still there, but they're owned by the previous user. They were automatically created by the ORM in the public schema.
While the -OLD application still exists I can retrieve the old username/password from the CF Web UI or $VCAP_SERVICES and drop the tables.
Is this all because of Rolling App Deployments? But then there should be a lot of complaints.
If you are strictly doing a cf push (or restart/restage), the broker isn't involved (Cloud Controller doesn't talk to it), and service credentials won't change.
The only action through cf commands that can modify your credentials is doing an unbind followed by a bind. Many, but not all, service brokers will throw away credentials on unbind and provide new, unique credentials for a bind. This is often desirable so that you can rotate credentials if credentials are compromised.
Where this can be a problem is if you have custom scripts or cf cli plugins to implement rolling deployments. Most tools like this will use two separate application instances, which means you'll have two separate bindings and two separate sets of credentials.
If you must have one set of credentials, you can use a service key to work around this. Service keys are like bindings, but are not associated with an application in Cloud Foundry.
The downside of a service key is that, unlike a binding, it is not automatically exposed to your application through $VCAP_SERVICES. To work around this, you can pass the service key creds into a user-provided service and then bind that to your application, or you can pass them into your application through other environment variables, like DB_URL.
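A hedged sketch of that workaround with the cf CLI (the key name, UPS name, and credential values are examples only):
cf create-service-key myapp-test-database migration-key
cf service-key myapp-test-database migration-key    # prints the stable credentials JSON
cf create-user-provided-service myapp-test-db-ups -p '{"username":"myuser","password":"value-from-the-service-key"}'
cf bind-service myapp-test myapp-test-db-ups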
The other option is to switch away from using scripts and cf cli plugins for blue/green deployment and to use the support that is now built into Cloud Foundry. With cf cli version 7+, cf push has a --strategy option which can be set to rolling to perform a rolling deployment. This does not create multiple application instances and so there would only ever exist one service binding and one set of credentials.
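For example, with the same app name as above:
cf push myapp-test --strategy rolling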
Request a static username using the extra bind parameter "username":
cf bind-service my-app-test-CANDIDATE myapp-test-database -c "{\"username\":\"myuser\"}"
With cf CLI v7+ it's possible to add parameters to the manifest:
applications:
- name: myapp-test
  services:
  - name: myapp-test-database
    parameters: { "username": "myuser" }
https://docs.cloudfoundry.org/devguide/services/application-binding.html#arbitrary-params-binding
Note: Arbitrary parameters are not supported in app manifests in cf CLI v6.x. Arbitrary parameters are supported in app manifests in cf CLI v7.0 and later.
However, I can't find the new syntax here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block . The syntax I use comes from some other SO question.

Can't use Managed Service identity (MSI) for App Service deployment with hosted Microsoft agent

We have a release pipeline that is failing with the following message:
resource ID for resource type 'Microsoft.Web/Sites' and resource name 'appservicename'. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
We have 2 different service connections:
Azure Resource Manager using service principal authentication
Azure Resource Manager using managed identity authentication
The first one works like a charm. However, because the developer wanted to limit admin access on Azure AD, he tried creating a managed identity authentication service connection. At first glance it appeared to work, since it allowed us to select the App Service, until an actual deployment was triggered and failed with the error message above.
After numerous searches online, I think this answer may be the clue to why this is failing with the managed identity authentication service connection yet succeeding with the service principal connection just fine.
I just want to confirm: is it truly the case that a hosted agent doesn't support MSI-based authentication, which is what we are using, or has that changed?
We are indeed using the Microsoft-hosted agent pool.
It doesn't make sense for our app service to use a VM at this time. The use case just isn't applicable for the dashboards we have.
As it is written in the docs:
You are required to use a self-hosted agent on an Azure VM in order to use managed service identity
I assume that it was always like that. Here we are talking about the MSI assigned to the VM which serves as the build agent, not the MSI which is the identity of the App Service. Why? A service connection is an abstraction that makes authentication to your Azure subscription easy. It gives an identity to the VM, and when you perform some action against Azure, the MSI lets Azure know that the VM may perform that action. Another option is authentication via a service principal, but this can be done from any VM (including a Microsoft-hosted one) because it relies on a client ID and client secret kept in the service connection. An MSI, on the other hand, has to be assigned to a particular VM, which cannot be done with Microsoft-hosted agents.
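To illustrate the difference, a service principal login needs only values stored in the service connection, so it works from any agent. A minimal Az PowerShell sketch (the environment variable names are placeholders):
# Client ID as user name, client secret as password
$secret = ConvertTo-SecureString $env:SP_CLIENT_SECRET -AsPlainText -Force
$cred = [pscredential]::new($env:SP_CLIENT_ID, $secret)
Connect-AzAccount -ServicePrincipal -Credential $cred -Tenant $env:AZURE_TENANT_ID
# An MSI login, by contrast, only works on the Azure VM the identity is assigned to:
# Connect-AzAccount -Identity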

Getting started with Vault for existing non-containerized Windows apps

We have a bunch of Windows server applications that currently handle secrets as follows; our apps are in C#.
We store them in settings files in code
We store them encrypted, using a certificate
The servers have this certificate with the private key, so they can decrypt the secret
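One way to picture that pattern is with PowerShell's CMS cmdlets, which encrypt to a certificate's public key and decrypt with its private key; this is only an analogy for our C# code, the names are invented, and the cert needs the Document Encryption EKU:
# Encrypt once, at configuration time, against the public key
Protect-CmsMessage -Content 'db-password' -To 'cn=AppSettingsCert' -OutFile '.\secret.cms'
# Decrypt on a server that holds the private key
Unprotect-CmsMessage -Path '.\secret.cms'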
We're looking at implementing Hashicorp Vault. It seems easy enough to simply replace the encrypt-store-decrypt with storing the secret in Vault in the KV engine, and just grabbing it in our apps - that takes that certificate out of the picture entirely. Since we're on-prem, I'll need to figure out our auth method.
We have different apps running on different machines, and it's somewhat dynamic (not as much as an autoscaling scenario, but not permanent - so we can't just assign servers to roles one time and depend on Kerberos auth).
I'm unsure how to make AppRole work in our scenario. We don't have one of the example "trusted platforms" or "trusted entities": there's no Nomad, Chef, Terraform, etc. We have Windows machines in a domain, and we have a homegrown orchestrator that could be queried to say "This machine name runs these apps", so maybe there's something that can be done there?
Am I in "write your own auth plugin" territory, to speak to our homegrown orchestrator?
Edit - someone on Reddit suggested that this is a simple solution if our apps are all 1-to-1 with the Windows domain account they run under, because then we can just use kerb authentication. That's not currently the way we're architected, but we've got to solve this somehow, and that might do it nicely.
2nd edit - replaced "services" with "apps", since most of our services aren't actually running as Windows services, just processes. The launcher is a Windows service but the individual processes it launches are not.
How about Group Managed Service Accounts?
https://learn.microsoft.com/en-us/windows-server/security/group-managed-service-accounts/group-managed-service-accounts-overview
Essentially you create one "trusted platform" (from the point of view of your vault service).
Your service can still have its own identity but delegate to the gMSA when it wants to retrieve the secrets.
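For reference, provisioning a gMSA is a couple of ActiveDirectory-module cmdlets; the account and group names here are examples:
# Run once in the domain (requires the KDS root key to exist)
New-ADServiceAccount -Name 'vaultReader' -DNSHostName 'vaultReader.corp.example.com' -PrincipalsAllowedToRetrieveManagedPassword 'AppServers'
# Run on each member server that should use the account
Install-ADServiceAccount -Identity 'vaultReader'
Test-ADServiceAccount -Identity 'vaultReader'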
For future visibility, here's what we landed on:
TLS certificate authentication. Using Vault, we issue a handful of certs; each corresponds to a security policy/profile, so that any machine holding that certificate can authenticate and retrieve the secrets it should have access to.
Kerberos ended up being a dead-end for two reasons. The vault.exe agent (which is part of this use case) can't use the native Windows Kerberos SSPI, so we'd have to manage and distribute keytab files. Also, if we used machine authentication, it would blow up our client count (we're using the cloud-hosted HCP Vault, where pricing is partially based on client count).
Custom plugins can't be loaded into HCP Vault, of course.
Azure auth won't work either; it requires managed identities, which you can't assign to on-prem machines. Otherwise it might have been a great fit.
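For reference, a rough sketch of what the TLS certificate login looks like against Vault's HTTP API, assuming the cert auth method is mounted at auth/cert (the file and secret paths are made up):
# Authenticate with the client certificate and obtain a Vault token
$cert = Get-PfxCertificate -FilePath 'C:\certs\deploy-profile.pfx'
$login = Invoke-RestMethod -Method Post -Uri "$env:VAULT_ADDR/v1/auth/cert/login" -Certificate $cert
$token = $login.auth.client_token
# Read a KV v2 secret with the returned token
$secret = Invoke-RestMethod -Uri "$env:VAULT_ADDR/v1/secret/data/myapp/db" -Headers @{ 'X-Vault-Token' = $token }
$secret.data.data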

Change service fabric config in live environment

I have configured my service fabric services to use Azure Key Vault for configuration. If, after the app is deployed, I change the config in Key Vault, how do I then restart the affected service so it can pick up the new config value?
Or is there another way altogether?
The best way to handle configuration on SF is to use your application parameters file. If you use a continuous deployment pipeline like VSTS, you can use release variables to set these values, deploy a new version of your configuration file, and let SF do the rest.
But in case you still need to use Key Vault:
If you are using ASP.NET Core, using Azure Key Vault to store secrets works like loading configuration files: the values are cached until you reload them.
You can use IConfigurationRoot.Reload() to reload the new secret values from your key vault. Check it here.
The trick now is to make it automatic. You have to:
Enable Key Vault logging to track the changes; this will emit logs once you update the key vault. Check it here and here.
And then:
Create an endpoint in your API to be called and refresh the secrets. Make it secure to avoid abuse.
Create an Azure Function to process these logs and trigger the endpoint (a sketch of this option follows the list below)
Or:
Create a message queue to receive the command, and have the system read the message to refresh the settings
Or:
Make a timer to refresh at specific intervals (I would not recommend this approach because you might end up with outdated config, but it is easy and useful for quick test scenarios, not production)
Or, if you prefer a more custom solution, you could create your own ConfigurationProvider based on Key Vault and implement the cache logic according to your app architecture, so you don't have to bother with the rest. Please refer to the ASP.NET source here for this.
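As a sketch of the function-plus-endpoint option above, assuming an Event Grid-triggered PowerShell Azure Function subscribed to the vault's change events (the refresh URL and header names are invented):
param($eventGridEvent, $TriggerMetadata)
# A Key Vault value changed; poke the app's refresh endpoint so it calls Reload()
Invoke-RestMethod -Method Post -Uri $env:CONFIG_REFRESH_URL -Headers @{ 'x-refresh-key' = $env:CONFIG_REFRESH_KEY }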
The documented way to provide configuration to your services is by using the 'configuration' part of your application package.
As this is versioned, it can be upgraded without requiring your services to be upgraded or even restarted.
More info here and here.
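A config-only change is rolled out like any other upgrade; a hedged PowerShell sketch, assuming only the config package version was bumped in the manifests (cluster, package, and version names are hypothetical):
# After bumping the ConfigPackage/ApplicationManifest versions, e.g. to 1.0.1
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster:19000'
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath .\MyAppPkg -ImageStoreConnectionString 'fabric:ImageStore' -ApplicationPackagePathInImageStore MyAppPkg
Register-ServiceFabricApplicationType -ApplicationPathInImageStore MyAppPkg
Start-ServiceFabricApplicationUpgrade -ApplicationName 'fabric:/MyApp' -ApplicationTypeVersion '1.0.1' -Monitored -FailureAction Rollback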

Azure Authentication for Deployment - PowerShell

For deployment to Azure we need to authenticate, and we have two methods to achieve this.
1. Publish settings file
Download the publish settings file and use the subscription name to set the subscription before deployment.
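With the classic (ASM) Azure PowerShell module, that flow was a couple of cmdlets; the file and subscription names are examples:
Import-AzurePublishSettingsFile -PublishSettingsFile 'C:\keys\mysub.publishsettings'
Select-AzureSubscription -SubscriptionName 'My Subscription'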
2. Management certificate
Create and upload the .cer file to the Management Portal.
Install the associated .pfx on the client machine during deployment, and authenticate with the thumbprint.
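Again in classic (ASM) terms, a sketch of the certificate variant; the thumbprint variable and names are placeholders:
# The .pfx has already been installed into the user's certificate store
$cert = Get-Item "Cert:\CurrentUser\My\$thumbprint"
Set-AzureSubscription -SubscriptionName 'My Subscription' -SubscriptionId $subscriptionId -Certificate $cert
Select-AzureSubscription -SubscriptionName 'My Subscription'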
Can someone tell me which one is better in terms of security, manageability and ease of usage?