What is the difference between vault write and vault kv put? - hashicorp-vault

I'm currently learning about Vault. I know what a secrets engine is and how it works, but I haven't found any information about the difference between vault write and vault kv put. It seems to me that these commands do the same thing. Am I wrong?

Basically, vault write is the generic command for writing data to any Vault path, and it was the usual way to write secrets in version 1 of the k/v secrets engine.
According to the Vault documentation, it's better to use vault kv put for k/v: the kv subcommands know which version of the engine they are talking to, so for k/v version 2 they automatically insert the data/ segment into the API path and wrap the payload the way the versioned API expects. vault write is still available, but it gives you none of that help. Note that with both commands a write replaces the whole secret at the path rather than appending to it; on k/v version 2 the old contents are kept as a previous version, and vault kv patch can update individual keys.
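A quick illustration, assuming a k/v version 2 engine mounted at secret/ (the path and values here are made up):

# KV-aware: the CLI translates secret/myapp into the v2 API path secret/data/myapp
vault kv put secret/myapp username=alice password=s3cret

# Generic write: you must target the real API path and nest the payload yourself
cat > payload.json <<'EOF'
{"data": {"username": "alice", "password": "s3cret"}}
EOF
vault write secret/data/myapp @payload.json

# Read it back
vault kv get secret/myapp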

Related

Generate OAS3 API from AWS serverless YAML without deploying?

I am developing a moderately complex (for me) serverless infrastructure on AWS that consists of about 50 lambdas, and I would like to start fleshing out the API documentation, but I'm finding it very tedious. Right now, any time I want to make a minor schema or documentation change in my YAML, I redeploy the whole CloudFormation stack and then regenerate the OAS3 API with
sls deploy
aws apigateway get-export --parameters extensions='apigateway' \
--rest-api-id $API_ID --stage-name dev --export-type oas30 \
latest_changes.json
This is obviously pretty time consuming and I feel like there must be a better way. I poked around with the serverless-documentation plugin, but that still seems to require a redeploy (and only works with OAS2), and I've now started investigating serverless-offline (which I wish I'd known about sooner), but before I go down that rabbit hole I wanted to see if there's a better way to do this.
I was ultimately able to do what I want using the serverless-openapi plugin (I'm actually using the pvdlg fork), which uses a custom:documentation style very similar to the OAS2 serverless-aws-documentation plugin from serverless.com. With that plugin installed I can then run sls openapi generate -f json -o oas3api.json to output an OpenAPI spec that's pretty close to fully documented, though I did have to write a small script to ingest the spec, add in per-path security sections (which the plugin doesn't seem to support) and an info block for things like a contact name and TOS, and spit it back out again.
But at least it means no more waiting for AWS to update the CloudFormation just so I can update my docs!
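For what it's worth, a minimal sketch of that kind of post-processing script (the file name, contact details, and api_key security scheme are placeholders, not something the plugin provides):

import json

# Load the spec emitted by `sls openapi generate -f json -o oas3api.json`
with open("oas3api.json") as f:
    spec = json.load(f)

# Add an info block the plugin doesn't emit
spec["info"].update({
    "contact": {"name": "API Team", "email": "api@example.com"},
    "termsOfService": "https://example.com/tos",
})

# Attach a security requirement to every operation
for path_item in spec.get("paths", {}).values():
    for method, operation in path_item.items():
        if method in ("get", "post", "put", "patch", "delete"):
            operation.setdefault("security", [{"api_key": []}])

with open("oas3api.json", "w") as f:
    json.dump(spec, f, indent=2)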

Need recommendations on updating secrets in Key Vault

I am deploying an Azure function app that calls a backend API secured by API keys. The app will be deployed using an Azure DevOps pipeline; the API key will be stored as a secret in Key Vault. I am using Bicep files for the infra definition and a YAML pipeline for deploying the infra and the application. Below are the questions I have regarding Key Vault updates:
Should the pipeline be responsible for updating secrets in Key Vault? If so, is it recommended to maintain secrets in a DevOps variable group (padlocked), or is there a better/more secure approach?
Should the secrets in Key Vault be updated/maintained manually? Going with this approach our pipelines will be less mature, as there would still be manual intervention and the setup would not be immutable (consider, for example, having to recreate an environment).
I would like to know the best practices on the above, thanks.
The answer is probably "it depends", which makes this a question that might attract opinion-based answers. If you look at the Microsoft documentation, it states:
Don't set secret variables in your YAML file. Operating systems often log commands for the processes that they run, and you wouldn't want the log to include a secret that you passed in as an input. Use the script's environment or map the variable within the variables block to pass secrets to your pipeline.
You need to set secret variables in the pipeline settings UI for your pipeline. These variables are scoped to the pipeline in which you set them. You can also set secret variables in variable groups.
Source: Define variables - Set secret variables
Despite this, you should ask yourself where the ownership of those secrets lies. And the owner should be the one in charge.
Maintaining these secrets in KeyVault means you don't even need secrets in your pipeline. This means a clear separation of responsibilities.
Maintaining these secrets in your pipeline enables you to update them together with the code that uses them. This ties the secret to the consuming code.
Both are good scenarios, depending on where the responsibility lies.
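For the Key Vault route, a minimal sketch of a pipeline job that pulls the secret at runtime, so nothing secret lives in the pipeline definition itself (the service connection, vault, and secret names below are placeholders):

steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-service-connection'  # placeholder service connection
    KeyVaultName: 'my-keyvault'                 # placeholder vault name
    SecretsFilter: 'backend-api-key'            # fetch only what this job needs
- script: ./deploy.sh
  env:
    API_KEY: $(backend-api-key)  # secrets must be mapped into scripts explicitly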

What is the purpose of using a secret injector in k8s instead of writing code in my software to handle my secrets in a vault like Google SM

Ok.. so, we have Google Secret Manager on GCP, AWS Secrets Manager on AWS, Key Vault on Azure... and so on.
Those services give you libraries so you can code the way your software will access the secrets there. They all look straightforward and sort of easy to implement. Right?
For instance, using Google SM you can do something like:
from google.cloud import secretmanager

# Build a client and fetch one version of one secret by its resource name
client = secretmanager.SecretManagerServiceClient()
request = {"name": "projects/<proj-id>/secrets/mysecret/versions/1"}
response = client.access_secret_version(request=request)
payload = response.payload.data.decode("UTF-8")
and you're done.
I mean, if we talk about K8S, you can improve the code above by reading the vars from a configmap where you may have all the resources of your secrets, like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: myms
  namespace: myns
data:
  DBPASS: projects/<proj-id>/secrets/mysecretdb/versions/1
  APIKEY: projects/<proj-id>/secrets/myapikey/versions/1
  DIRTYSECRET: projects/<proj-id>/secrets/mydirtysecret/versions/1
And then use part of the code above to load the vars and get the secrets from the SM.
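Putting those two pieces together, a rough sketch (assuming the ConfigMap above is exposed to the pod as environment variables, so each var holds a Secret Manager resource name rather than the secret itself):

import os
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

def load_secret(env_var):
    # e.g. DBPASS -> projects/<proj-id>/secrets/mysecretdb/versions/1
    resource_name = os.environ[env_var]
    response = client.access_secret_version(request={"name": resource_name})
    return response.payload.data.decode("UTF-8")

dbpass = load_secret("DBPASS")
apikey = load_secret("APIKEY")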
So, when I was searching the interwebs for best practices and examples, I found projects like the ones below:
https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp
https://github.com/doitintl/secrets-init
https://github.com/doitintl/kube-secrets-init
https://github.com/aws-samples/aws-secret-sidecar-injector
https://github.com/aws/secrets-store-csi-driver-provider-aws
But those projects don't clearly explain the point of mounting my secrets as files or env vars.
I got really confused. Maybe I'm too much of a newbie in the K8S and cloud world... and that's why I'm here asking what may be really dumb questions. Sorry :/
My questions are:
Are the projects mentioned above recommended for old code that I do not want to touch? I mean, let's say my code already uses an env var called DBPASS=mypass and I would like to work around it so that the value of the DBPASS env var gets hack-injected with a value from SM.
Is implementing the handling of a secret manager yourself hard enough that it is recommended to use one of the solutions above?
What are the advantages of such injection approach?
Thx a lot!
There are many possible motivations why you may want to use an abstraction (such as the CSI driver or sidecar injector) over a native integration:
Portability - If you're multi-cloud or multi-target, you may have multiple secret management solutions. Or you might have a different secret manager target for local development versus production. Projecting secrets onto a virtual filesystem or into environment variables provides a "least common denominator" approach that decouples the application from its secrets management provider.
Local development - Similar to the previous point on portability, it's common to have "fake" or fakeish data for local development. For local dev, secrets might all be fake and not need to connect to a real secret manager. Moving to an abstraction avoids error-prone spaghetti code like:
let secret;
if (process.env.RAILS_ENV === 'production') {
  secret = secretmanager.access('...');
} else {
  secret = 'abcd1234';
}
De-risking - By avoiding a tight coupling, you can de-risk upstream API changes behind an abstraction layer. This is conceptually similar to the benefits of microservices. As a platform team, you make a guarantee to your developers that their secret will live at process.env.FOO, and it doesn't matter how it gets there, so long as you continue to fulfill that API contract.
Separation of duties - In some organizations, the platform team (k8s team) is separate from the security team, which is separate from the development teams. It might not be realistic for a developer to ever have direct access to a secret manager.
Preserving identities - Depending on the implementation, the actor that accesses the secret can vary. Sometimes it's the k8s cluster, sometimes it's the individual pod. Both have trade-offs.
Why might you not want this abstraction? Well, it adds additional security concerns. Exposing secrets via environment variables or via the filesystem makes you subject to a generic series of supply chain attacks. Using a secret manager client library or API directly doesn't entirely prevent this, but it forces a more targeted attack (e.g. a core dump) instead of a more generic path-traversal or env-dump-to-pastebin attack.
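To make the "least common denominator" idea concrete, here is a sketch of what application code looks like once secrets are projected onto the filesystem (the mount path is an assumption; it is whatever your CSI driver or injector is configured to use):

from pathlib import Path

# The app only knows a file path; it cannot tell whether the value came from
# Google SM, AWS Secrets Manager, Vault, or a plain local file in development.
def read_secret(name, mount="/mnt/secrets"):
    return Path(mount, name).read_text().strip()

dbpass = read_secret("DBPASS")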

Secrets management across environments in HashiCorp Vault

Problem: there are Vault instances in dev/stage/production (each independent). If I need to create or modify a key/value (kv secrets engine), I can do so manually in each env. This leads to environments drifting out of sync, and as part of the deployment we need to remember to write values to Vault (I'm sure you can guess what happens).
What I would like to be able to do is externalize the paths/keys so they can be applied to each environment. This could potentially be a file in Git that we write to each env through a script, but then how do we get the values to use? Further, these values are different in each environment, and they need a secure location to live.
I have tried to search for this and while there is plenty about managing Vault itself I have not found any solution to managing the data within Vault. Any guidance would be much appreciated.
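To sketch the idea in the question (the file layout and prompting are assumptions, not an established pattern): a Git-tracked file lists only the paths, and a small script run against each environment supplies that environment's values, which never land in Git:

# paths.txt, tracked in Git: one kv path per line, e.g.
#   secret/myapp/db
#   secret/myapp/api

# seed.sh: run once per environment with VAULT_ADDR pointed at that env
while read -r path; do
  printf 'Value for %s on %s: ' "$path" "$VAULT_ADDR"
  read -rs value </dev/tty
  echo
  vault kv put "$path" value="$value"
done < paths.txt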

Advantages of Templates ( ie infrastructure as code) over API calls

I am trying to set up a module to deploy resources in the cloud (it could be any cloud provider). I don't see the advantages of using templates (i.e. a deployment manager) over direct API calls:
Creating a VM using a template:
# deployment.yaml
resources:
- type: compute.v1.instance
  name: quickstart-deployment-vm
  properties:
    zone: us-central1-f
    machineType: f1-micro
    ...
# bash command to deploy yaml file
gcloud deployment-manager deployments create vm-deploy --config deployment.yaml
Creating a VM using an API call:
import json

def addInstance(http, listOfHeaders):
    url = "https://www.googleapis.com/compute/v1/projects/[PROJECT_ID]/zones/[ZONE]/instances"
    body = {
        "name": "quickstart-deployment-vm",
        "zone": "us-central1-f",
        "machineType": "f1-micro",
        # ... further instance properties
    }
    # The Compute API expects a JSON-encoded body, not a form-encoded one
    http.request(uri=url, method="POST", body=json.dumps(body), headers=listOfHeaders)
Can someone explain to me what benefits I get using templates?
Readability / ease of use / authentication handled for you / no need to be a coder / etc. There can be many advantages; it really depends on how you look at it, on your background, and on the tools you use.
For you specifically, it might be more beneficial to use Python all the way.
It's easier to use templates, and you get a lot of built-in functionality, such as validating your template to scan for possible security vulnerabilities and the like. You can also easily delete your infra using the same template you created it with. FWIW, I've gone all in on templates and do as much as I can with them, in small units. That makes it easy to move part of the infra out or duplicate it into another project, for example using a pipeline in GitLab to deploy it.
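For instance, with Deployment Manager the config from the question drives both directions:

# create the deployment from the template
gcloud deployment-manager deployments create vm-deploy --config deployment.yaml

# tear down everything that deployment created, using the same definition
gcloud deployment-manager deployments delete vm-deploy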
The reason to use templates over API calls is that templates can be used in use cases where a deterministic outcome is required.
Both templates and API calls have their own benefits; there is always a tradeoff between the two options. If you want more flexibility in the deployment, then API calls suit you better. On the other hand, if security and a complete revision history are your priority, then templates should be your choice. Details can be found in this online documentation.
When using a template, orchestration of the deployment is handled by the platform. When using API calls (or other imperative approaches), you need to handle orchestration yourself.