Problem - there are Vault instances in dev/stage/production (each independent). If I need to create or modify a key/value (kv secrets engine), I can do so manually in each env. This leads to environments being out of sync, and as part of the deployment we need to remember to write values to Vault (I'm sure you can guess what happens).
What I would like to be able to do is externalize the paths/keys so they can be applied to each environment. This could potentially be a file in Git that we write to each env through a script, but then how do we get the values to use? Further, these values are different in each environment and they need a secure location to live.
I have tried to search for this and while there is plenty about managing Vault itself I have not found any solution to managing the data within Vault. Any guidance would be much appreciated.
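For concreteness, the kind of script I have in mind would look something like this sketch (using the hvac client against a KV v2 engine; the file format, paths and the environment-variable lookup for the values are just examples):

import json
import os
import hvac

# paths-and-keys.json lives in Git and lists only paths and key names, never values,
# e.g. [{"path": "myapp/db", "keys": ["username", "password"]}]
with open("paths-and-keys.json") as f:
    entries = json.load(f)

# The values themselves would come from some environment-specific source; reading
# them from environment variables here is purely a placeholder for that open question.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

for entry in entries:
    secret = {key: os.environ[key.upper()] for key in entry["keys"]}
    client.secrets.kv.v2.create_or_update_secret(path=entry["path"], secret=secret)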
Related
I am deploying an Azure function app that calls a backend API secured by API keys. The app will be deployed using an Azure DevOps pipeline; the API key will be stored as a secret in KeyVault. I am using Bicep files for the infra definition and a YAML pipeline for deploying the infra and the application. Below are the questions I have w.r.t. KeyVault updates:
Should the pipeline be responsible for updating secrets in KeyVault? If so, is it recommended to maintain the secrets in a DevOps variable group (padlocked), or is there a better/more secure approach?
Should the secrets in KeyVault be updated/maintained manually? Going with this approach our pipelines would be less mature, as there would still be manual intervention and the setup would not be immutable (consider what happens when an environment has to be recreated).
I would like to know the best practices on the above, thanks.
The answer is probably "it depends", which makes this a question that might get some opinion-based answers. If you look at the Microsoft documentation, it states:
Don't set secret variables in your YAML file. Operating systems often log commands for the processes that they run, and you wouldn't want the log to include a secret that you passed in as an input. Use the script's environment or map the variable within the variables block to pass secrets to your pipeline.
You need to set secret variables in the pipeline settings UI for your pipeline. These variables are scoped to the pipeline in which you set them. You can also set secret variables in variable groups.
Source: Define variables - Set secret variables
Despite this, you should ask yourself where the ownership of those secrets lies, and the owner should be the one in charge.
Maintaining these secrets in KeyVault means you don't even need secrets in your pipeline. This means a clear separation of responsibilities.
Maintaining these secrets in your pipeline enables you to update them together with the code that uses them. This ties the secret to the consuming code.
Both are good scenarios, depending on where the responsibility lies.
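If the secrets live only in KeyVault, the application can, for example, read the API key at runtime and the pipeline never has to see it at all. A minimal sketch, assuming the azure-identity and azure-keyvault-secrets packages and placeholder vault/secret names:

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# The function app authenticates with its managed identity; no secret is kept
# in the pipeline or in a variable group.
client = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)
api_key = client.get_secret("backend-api-key").value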
OK, so we have Google Secret Manager on GCP, AWS Secrets Manager on AWS, Key Vault on Azure... and so on.
Those services give you client libraries so you can code the way your software accesses the secrets stored there. They all look straightforward and sort of easy to implement. Right?
For instance, using Google SM you can do something like this:
from google.cloud import secretmanager

# Read one version of the secret by its full resource name.
client = secretmanager.SecretManagerServiceClient()
request = {"name": "projects/<proj-id>/secrets/mysecret/versions/1"}
response = client.access_secret_version(request)
payload = response.payload.data.decode("UTF-8")
and you're done.
I mean, if we talk about K8s, you can improve the code above by reading the vars from a ConfigMap where you keep the resource names of all your secrets, like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myms
  namespace: myns
data:
  DBPASS: projects/<proj-id>/secrets/mysecretdb/versions/1
  APIKEY: projects/<proj-id>/secrets/myapikey/versions/1
  DIRTYSECRET: projects/<proj-id>/secrets/mydirtysecret/versions/1
And then use part of the code above to load the vars and get the secrets from the SM.
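For example, something like this minimal sketch ties the two together, assuming the ConfigMap above is exposed to the pod as environment variables (the variable names are the ones from the example):

import os
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

def resolve(env_var: str) -> str:
    # The env var holds the Secret Manager resource name from the ConfigMap,
    # e.g. projects/<proj-id>/secrets/mysecretdb/versions/1
    resource_name = os.environ[env_var]
    response = client.access_secret_version(request={"name": resource_name})
    return response.payload.data.decode("UTF-8")

db_pass = resolve("DBPASS")
api_key = resolve("APIKEY")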
So, when I was searching the interwebs for best practices and examples, I found projects like the ones below:
https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp
https://github.com/doitintl/secrets-init
https://github.com/doitintl/kube-secrets-init
https://github.com/aws-samples/aws-secret-sidecar-injector
https://github.com/aws/secrets-store-csi-driver-provider-aws
But those projects don't clearly explain what the point is of mounting my secrets as files or env vars.
I got really confused; maybe I'm too much of a newbie in the K8s and cloud world... and that's why I'm here asking some really, really dumb questions. Sorry :/
My questions are:
Are the projects mentioned above recommended for old code that I do not want to touch? I mean, let's say my code already uses an env var called DBPASS=mypass and I would like to work around it so the value of the DBPASS env var gets hack-injected with a value from a SM.
Is the implementation needed to handle a secret manager very hard, and is that why it is recommended to use one of the solutions above?
What are the advantages of such injection approach?
Thx a lot!
There are many possible motivations why you may want to use an abstraction (such as the CSI driver or sidecar injector) over a native integration:
Portability - If you're multi-cloud or multi-target, you may have multiple secret management solutions. Or you might have a different secret manager target for local development versus production. Projecting secrets onto a virtual filesystem or into environment variables provides a "least common denominator" approach that decouples the application from its secrets management provider.
Local development - Similar to the previous point on portability, it's common to have "fake" or fakeish data for local development. For local dev, secrets might all be fake and not need to connect to a real secret manager. Moving to an abstraction avoids error-prone spaghetti code like:
let secret;
if (process.env.RAILS_ENV === 'production') {
  secret = secretmanager.access('...');
} else {
  secret = 'abcd1234';
}
De-risking - By avoiding a tight coupling, you can de-risk upstream API changes in an abstraction layer. This is conceptually similar to the benefits of microservices. As a platform team, you make a guarantee to your developers that their secret will live at process.env.FOO, and it doesn't matter how it gets there, so long as you continue to fulfill that API contract.
Separation of duties - In some organizations, the platform team (k8s team) is separate from the security team, which is separate from the development teams. It might not be realistic for a developer to ever have direct access to a secret manager.
Preserving identities - Depending on the implementation, it's possible that the actor which accesses the secret varies. Sometimes it's the k8s cluster, sometimes it's the individual pod. Both have trade-offs.
Why might you not want this abstraction? Well, it adds additional security concerns. Exposing secrets via environment variables or via the filesystem makes you subject to a generic series of supply chain attacks. Using a secret manager client library or API directly doesn't entirely prevent this, but it forces a more targeted attack (e.g. core dump) instead of a more generic path traversal or env-dump-to-pastebin attack.
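To make that contract concrete, the consuming application can stay as simple as this sketch, regardless of which injector or secret manager sits behind it (the mount path and variable names here are just illustrative assumptions):

import os
from pathlib import Path

def get_secret(name: str) -> str:
    # The platform fulfills the contract: the secret is either exported as an
    # environment variable or projected as a file into the pod; the app does
    # not care which secret manager produced it.
    if name in os.environ:
        return os.environ[name]
    return Path(f"/var/run/secrets/app/{name}").read_text().strip()

db_pass = get_secret("DBPASS")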
I have installed Deis Workflow v2.11 in a GKE cluster, and some of our applications share values in common, like a proxy URL and credentials. I can use these values by putting them into environment variables, or even in a .env file.
However, for every new application I need to create a .env file with the shared values and then call
deis config:push
If one of those shared values changes, I need to adjust the configuration of every app and restart them. I would like to modify the value in a ConfigMap once and, after the change, have Deis restart the applications.
Does anyone know if it is possible to read values from a Kubernetes ConfigMap and put them into Deis environment variables? And if so, how do I do it?
I believe what you're looking for is a way to set environment variables globally across all applications. That is currently not implemented. However, please feel free to hack up a PR and we'd likely accept it!
https://github.com/deis/controller/issues/383
https://github.com/deis/controller/issues/1219
Currently there is no support for ConfigMaps in Deis Workflow v2.18.0. We would appreciate a PR into Hephy Workflow (the open source fork of Deis Workflow). https://github.com/teamhephy/controller
There is no functionality right now to pull a ConfigMap in via the init scripts of the containers.
You could update the configMap, but each of the applications would need to run kubectl replace -f path/accessible/for/everyone/configmap.yaml to get the variables updated.
So, I would say yes, at the Kubernetes level you can do it. Just figure out the best way for your apps to update the ConfigMap. I don't have details of your use case, so I can't tell you specific ways.
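If you do want to experiment at the Kubernetes level, an application can read the shared ConfigMap directly at startup with the official Python client; a minimal sketch, where the ConfigMap name, namespace and key are placeholders:

from kubernetes import client, config

# Assumes the pod runs with a service account that is allowed to read the ConfigMap.
config.load_incluster_config()
v1 = client.CoreV1Api()

cm = v1.read_namespaced_config_map(name="shared-config", namespace="myns")
proxy_url = cm.data["PROXY_URL"]  # the shared value, maintained in one place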
The One Binary principle explained here:
http://programmer.97things.oreilly.com/wiki/index.php/One_Binary states that one should...
"Build a single binary that you can identify and promote through all the stages in the release pipeline. Hold environment-specific details in the environment. This could mean, for example, keeping them in the component container, in a known file, or in the path."
I see many DevOps engineers arguably violate this principle by creating one Docker image per environment (i.e., my-app-qa, my-app-prod and so on). I know that Docker favours immutable infrastructure, which implies not changing an image after deployment, and therefore not uploading or downloading configuration post deployment. Is there a trade-off between immutable infrastructure and the one binary principle, or can they complement each other? When it comes to separating configuration from code, what is the best practice in a Docker world? Which one of the following approaches should one take?
1) Creating a base binary image and then having a configuration Dockerfile that augments this image by adding environment specific configuration. (i.e my-app -> my-app-prod)
2) Deploying a binary-only docker image to the container and passing in the configuration through environment variables and so on at deploy time.
3) Uploading the configuration after deploying the Docker image to a container
4) Downloading configuration from a configuration management server from the running docker image inside the container.
5) Keeping the configuration in the host environment and making it available to the running Docker instance through a bind mount.
Is there another better approach not mentioned above?
How can one enforce the one binary principle using immutable infrastructure? Can it be done, or is there a trade-off? What is the best practice?
I've about 2 years of experience deploying Docker containers now, so I'm going to talk about what I've done and/or know to work.
So, let me first begin by saying that containers should definitely be immutable (I even mark mine as read-only).
Main approaches:
use configuration files with a static entrypoint, overriding the configuration file location via the container startup command - that's less flexible, since one would have to commit the change and redeploy in order to enable it; not fit for passwords, secure tokens, etc
use configuration files by overriding their location with an environment variable - again, this depends on having the configuration files prepped in advance; not fit for passwords, secure tokens, etc
use environment variables - that might need a change in the deployment code, but it shortens the time to get a config change live, since it doesn't (in most cases) need to go through the application build phase, so deploying such a change can be pretty easy. For example, if deploying a containerised application to Marathon, changing an environment variable could potentially just start a new container from the last used container image (potentially even on the same host), which means that this could be done in mere seconds; not fit for passwords, secure tokens, etc., and especially so in Docker (a small sketch of this approach follows after this list)
store the configuration in a k/v store like Consul, make the application aware of that, and let it even be dynamically reconfigurable. Great approach for launching features simultaneously - possibly even across multiple services; if implemented with a solution such as HashiCorp Vault, which provides secure storage for sensitive information, you could even have ephemeral secrets (an example would be the PostgreSQL secret backend for Vault - https://www.vaultproject.io/docs/secrets/postgresql/index.html)
have an application or script create the configuration files before starting the main application - store the configuration in a k/v store like Consul, use something like consul-template in order to populate the app config; a bit more secure - since you're not carrying everything over through the whole pipeline as code
have an application or script populate the environment variables before starting the main application - an example for that would be envconsul; not fit for sensitive information - someone with access to the Docker API (either through the TCP or UNIX socket) would be able to read those
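As for the plain environment-variable approach mentioned above, a rough sketch of what it looks like from the application side (variable names and defaults are made up for the example; anything sensitive should come from one of the secret stores discussed instead):

import os

# Non-sensitive configuration is read straight from the environment at startup;
# the image stays identical across environments, only the variables change.
FEATURE_X_ENABLED = os.environ.get("FEATURE_X_ENABLED", "false").lower() == "true"
DB_HOST = os.environ.get("DB_HOST", "localhost")
DB_PORT = int(os.environ.get("DB_PORT", "5432"))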
I've even had a situation in which we were populating variables into the AWS instance user_data and injecting them into the container on startup (with a script that modifies the container's JSON config on startup).
The main things that I'd take into consideration:
what are the variables that I'm exposing, and when and where am I getting their values from (could be the CD software, or something else) - for example you could publish the AWS RDS endpoint and credentials to the instance's user_data, or potentially even EC2 tags with some IAM instance profile magic
how many variables do we have to manage and how often do we change some of them - if we have a handful, we could probably just go with environment variables, or use environment variables for the most commonly changed ones and variables stored in a file for those that we change less often
and how fast do we want to see them changed - if it's a file, it typically takes more time to deploy it to production; if we're using environment variables, we can usually deploy those changes much faster
how do we protect some of them - where do we inject them and how - for example Ansible Vault, HashiCorp Vault, keeping them in a separate repo, etc
how do we deploy - that could be a JSON config file sent to a deployment framework endpoint, Ansible, etc
what's the environment that we're having - is it realistic to have something like Consul as a config data store (Consul has 2 different kinds of agents - client and server)
I tend to prefer the most complex case of having them stored in a central place (k/v store, database) and have them changed dynamically, because I've encountered the following cases:
slow deployment pipelines - which makes it really slow to change a config file and have it deployed
having too many environment variables - this could really grow out of hand
having to turn on a feature flag across the whole fleet (consisting of tens of services) at once
an environment in which there is a real drive to increase security by better handling sensitive config data
I've probably missed something, but I guess that should be enough of a trigger to think about what would be best for your environment
How I've done it in the past is to incorporate tokenization into the packaging process after a build is executed. These tokens can be managed in an orchestration layer that sits on top to manage your platform tools. So for a given token, there is a matching regex or xpath expression. That token is linked to one or many config files, depending on the relationship that is chosen. Then, when this build is deployed to a container, a platform service (i.e. config mgmt) will replace these tokens with the correct value for its environment. These values would most likely be pulled from a vault.
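A stripped-down illustration of that token replacement step (the token format, file name and lookup are assumptions made up for the example; in practice the values would be resolved from a vault by the config management service):

import re
from pathlib import Path

def poke_tokens(config_path: str, values: dict) -> None:
    # Replace tokens of the form @@TOKEN_NAME@@ with the value resolved for
    # the target environment.
    text = Path(config_path).read_text()
    text = re.sub(r"@@(\w+)@@", lambda m: values[m.group(1)], text)
    Path(config_path).write_text(text)

poke_tokens("app.properties", {"DB_PASSWORD": "resolved-from-vault"})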
I'm using Amazon Web Services to create an autoscaling group of application instances behind an Elastic Load Balancer. I'm using a CloudFormation template to create the autoscaling group + load balancer and have been using Ansible to configure other instances.
I'm having trouble wrapping my head around how to design things such that when new autoscaling instances come up, they can automatically be provisioned by Ansible (that is, without me needing to find out the new instance's hostname and run Ansible for it). I've looked into Ansible's ansible-pull feature but I'm not quite sure I understand how to use it. It requires a central git repository which it pulls from, but how do you deal with sensitive information which you wouldn't want to commit?
Also, the current way I'm using Ansible with AWS is to create the stack using a CloudFormation template, then I get the hostnames as output from the stack, and then generate a hosts file for Ansible to use. This doesn't feel quite right – is there "best practice" for this?
Yes, another way is simply to run your playbooks locally once the instance starts. For example, you can create an EC2 AMI for your deployment whose rc.local file (Linux) calls ansible-playbook -i <inventory-only-with-localhost-file> <your-playbook>.yml. rc.local is almost the last script run at startup.
You could just store that sensitive information in your EC2 AMI, but this is a very wide topic and really depends on what kind of sensitive information it is. (You can also use private git repositories to store sensitive data).
If, for example, your playbooks get updated regularly, you can create a cron entry in your AMI that runs every so often and actually runs your playbook, to make sure your instance configuration is always up to date. This avoids having to "push" from a remote workstation.
This is just one approach; there could be many others, and it depends on what kind of service you are running, what kind of data you are using, etc.
I don't think you should use Ansible to configure new auto-scaled instances. Instead use Ansible to configure a new image, of which you will create an AMI (Amazon Machine Image), and order AWS autoscaling to launch from that instead.
On top of this, you should also use Ansible to easily update your existing running instances whenever you change your playbook.
Alternatives
There are a few ways to do this. First, I wanted to cover some alternative ways.
One option is to use Ansible Tower. This creates a dependency though: your Ansible Tower server needs to be up and running at the time autoscaling or similar happens.
The other option is to use something like packer.io and build fully-functioning server AMIs. You can install all your code into these using Ansible. This doesn't have any non-AWS dependencies, and has the advantage that it means servers start up fast. Generally speaking building AMIs is the recommended approach for autoscaling.
Ansible Config in S3 Buckets
The alternative route is a bit more complex, but has worked well for us when running a large site (millions of users). It's "serverless" and only depends on AWS services. It also supports multiple Availability Zones well, and doesn't depend on running any central server.
I've put together a GitHub repo that contains a fully-working example with CloudFormation. I also put together a presentation for the London Ansible meetup.
Overall, it works as follows:
Create S3 buckets for storing the pieces that you're going to need to bootstrap your servers.
Save your Ansible playbook and roles etc in one of those S3 buckets.
Have your Autoscaling process run a small shell script. This script fetches things from your S3 buckets and uses them to "bootstrap" Ansible.
Ansible then does everything else.
All secret values such as Database passwords are stored in CloudFormation Parameter values. The 'bootstrap' shell script copies these into an Ansible fact file.
So that you're not dependent on external services being up, you also need to save any build dependencies (e.g. any .deb files, package install files or similar) in an S3 bucket. You want this because you don't want to require ansible.com or similar to be up and running for your Autoscale bootstrap script to be able to run. Generally speaking, I've tried to only depend on Amazon services like S3.
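As a rough Python illustration of what that bootstrap step does (the repo itself uses a shell script; the bucket name, object key, parameter value and fact file location below are made up):

import json
import boto3

s3 = boto3.client("s3")

# Fetch the playbook bundle that was uploaded to the bootstrap bucket.
s3.download_file("my-bootstrap-bucket", "ansible/playbooks.tar.gz", "/tmp/playbooks.tar.gz")

# Write the CloudFormation parameter values (handed to the instance at launch)
# out as an Ansible local fact file so the playbooks can read them.
facts = {"db_password": "<value-from-cloudformation-parameter>"}
with open("/etc/ansible/facts.d/bootstrap.fact", "w") as f:
    json.dump(facts, f)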
In our case, we then also use AWS CodeDeploy to actually install the Rails application itself.
The key bits of the config relating to the above are:
S3 Bucket Creation
Script that copies things to S3
Script to copy Bootstrap Ansible. This is the core of the process. This also writes the Ansible fact files based on the CloudFormation parameters.
Use the Facts in the template.