I understand the whole idea behind HashiCorp Vault is to store secrets securely. But for debugging purposes, is there a way to view or print the dynamic secrets generated by the transit or AWS secrets engines (and others)?
This question was asked a while ago, but:
In general, it depends, but the answer is likely no: Vault is designed to be very restrictive with its secrets, and the details vary by engine.
As for the two engines you mentioned:
The transit engine does support exportable keys, so that may be one way to get the desired output. But you would also need the ciphertext you want to decrypt.
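For example, a minimal sketch assuming a transit key named "my-key" (note that exportable must be set when the key is created; it cannot be enabled later):
vault write transit/keys/my-key exportable=true
vault read transit/export/encryption-key/my-key   # needs a policy allowing this path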
The AWS engine requires configuration to connect to AWS, so it may be possible to query that config for the information. However, I am not sure, as I do not use the AWS secrets engine much.
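If you want to try it, something like this may work (hedged: recent Vault versions return the root config without the secret key, and "my-role" is a placeholder):
vault read aws/config/root      # newer versions return the access key, never the secret key
vault read aws/creds/my-role    # dynamic credentials are only shown at generation time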
Both of these methods will require significant privileges, such as a policy with "sudo" capability or the root Vault token.
Finally, if you have enough of the unseal keys, access to Vault's backend, and an understanding of Vault's inner workings, you could look directly at Vault's storage.
I need to store very sensitive secrets that a user provides me with (username+password+certificate for authenticating into a 3rd party API).
I was first considering AWS Secrets Manager, which is very expensive and IMHO mainly for infrastructure secrets (database passwords, API keys, ...) and not customer-provided secrets. Now I'm deciding between using AWS KMS (and storing the secrets encrypted in a database (AWS RDS) using envelope encryption) and Hashicorp Vault.
https://www.vaultproject.io/docs/secrets/transit
https://www.vaultproject.io/docs/secrets/kv/kv-v2
From what I've read, I've come to the conclusion that Vault KV is mainly for infrastructure secrets and Vault Transit might be somewhat equivalent to AWS KMS (as in, better for customer-provided secrets).
Since I'm building a very small application, if I decide to use the Vault KV, I won't need a database at all. But I'm not sure if Vault KV is the right fit.
Is there some limitation or a possible problem (for this use-case) with Vault KV I should be aware of?
Thanks
About KMS
AWS KMS really only manages the master key. Parameter Store will use KMS under the hood to manage its encryption keys. And if you deploy Vault in AWS, you'll probably use KMS too, to unseal and as the master key. You probably don't want to use KMS directly, because the other solutions give you per-secret/path policies, secret versioning, and audit logs: all features you probably want/need that KMS won't give you directly.
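(For context, the direct-KMS envelope encryption the question mentions would look roughly like this; the key alias is a placeholder:)
aws kms generate-data-key --key-id alias/my-app --key-spec AES_256   # returns Plaintext + CiphertextBlob
aws kms decrypt --ciphertext-blob fileb://data-key.blob              # recover the plaintext data key later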
Vault KV secrets engine versus Vault transit secrets engine
The Vault key-value secrets engine lets you store the secret, and Vault manages the encryption, audit logs, and access (and versioning if you use KV v2).
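A minimal sketch of the KV v2 flow (mount path and secret names are placeholders):
vault secrets enable -path=customer-secrets kv-v2
vault kv put customer-secrets/customer-42 username=alice password=s3cret
vault kv get customer-secrets/customer-42                # latest version
vault kv get -version=1 customer-secrets/customer-42     # a specific version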
The transit secrets engine can be seen as "encryption as a service":
you call it to create a keyring (think of it as a data encryption key, with rotation mechanisms built in, hence the keyring)
then you can call it with a keyring reference to encrypt some data, and get encrypted ciphertext back that you can store in a database or in a file. Or you can do the reverse: call it with a keyring reference and some ciphertext, and ask it to decrypt and get the data back (assuming you have the correct policies). See the sketch after this list.
or, if the data you want to encrypt/decrypt is "large" (depending on your use case), you can use it to get a data key for encrypting/decrypting your data locally. (You get the key and encrypt or decrypt with it, but you don't have to deal with the encryption key's security: Vault manages that for you, so you can just wipe it from memory and get it back from Vault the next time.)
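A minimal sketch of both patterns, assuming a hypothetical keyring named "orders" (transit expects base64-encoded plaintext):
vault write -f transit/keys/orders                                    # create the keyring
vault write transit/encrypt/orders plaintext=$(base64 <<< "my data")  # returns ciphertext like vault:v1:...
vault write transit/decrypt/orders ciphertext="vault:v1:<ciphertext>" # returns the base64 plaintext
vault write -f transit/datakey/plaintext/orders                       # data key for encrypting large payloads locally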
Should you use Vault or AWS SSM Parameter Store?
Like so many things: it depends. It depends on your criteria. I do love both, but for different use cases, so let me list the main differences I see between both, and hopefully, it will give you enough context to make your choice:
Managed or not?: AWS SSM Parameter Store is fully managed and cheap, so this is a burden you don't have to think about. If Parameter Store fills your needs, go with it; it gives you some precious time back to work on other things.
Access management: Vault comes with a lot of authentication options and easy-to-reason-about policies. If IAM policies are enough to cover all your use cases for granting minimal access to these secrets, Parameter Store is a good option. Otherwise, Vault has you covered.
Don’t forget Vault provides a lot of other secret/encryption tools. Chances are they can benefit your project (or not, but check this)
My rule of thumb would be: if AWS IAM is enough, and you don’t have any other needs than simple secret storage, SSM Parameter Store sounds like a good idea.
If you have other encryption needs, or some other authentication/policy requirements that would make it more challenging to build on top of IAM, Vault will shine.
And if you have a lot of secrets to store/encrypt/decrypt, Vault's transit secrets engine to encrypt/decrypt the data, plus your regular DB to store the encrypted blobs, will work perfectly.
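For comparison, the Parameter Store flow is as simple as this (the parameter name is a placeholder):
aws ssm put-parameter --name /myapp/db-password --type SecureString --value 's3cret'
aws ssm get-parameter --name /myapp/db-password --with-decryption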
Curious why you are avoiding AWS SSM. It may be a little too limiting if you have a lot of data to encrypt and store, but just curious.
It seems like if you want to avoid the cost of Secrets Manager, then an encrypted RDS database isn't gaining you too much.
AWS Secrets Manager seems pretty reasonable unless you have a VERY high volume.
I want to give a service account read-only access to every bucket in my project. What is the best practice for doing this?
The answers here suggest one of:
creating a custom IAM policy
assigning the Legacy Bucket Viewer role on each bucket
using ACLs to allow bucket.get access
None of these seem ideal to me because:
Giving read-only access seems too common a need to require a custom policy
Putting "Legacy" in the name makes it seem like this permission will be retired relatively soon and any new buckets will require modification
Google recommends IAM over ACL and any new buckets will require modification
Is there some way to avoid the bucket.get requirement and still access objects in the bucket? Or is there another method for providing access that I don't know about?
The closest pre-built role is Object Viewer. This allows listing and reading objects. It doesn't include storage.buckets.get permission, but this is not commonly needed - messing with bucket metadata is really an administrative function. It also doesn't include storage.buckets.list which is a bit more commonly needed but is still not part of normal usage patterns for GCS - generally when designing an app you have a fixed number of buckets for specific purposes, so listing is not useful.
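Granting it at the project level looks like this (the service account email is a placeholder):
gcloud projects add-iam-policy-binding $YOUR_PROJECT --member="serviceAccount:reader@my-project.iam.gserviceaccount.com" --role="roles/storage.objectViewer"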
If you really do want to give a service account bucket list and get permission, you will have to create a custom role on the project. This is pretty easy; you can do it with:
gcloud iam roles create StorageViewerLister --project=$YOUR_PROJECT --permissions=storage.objects.get,storage.objects.list,storage.buckets.get,storage.buckets.list
gcloud projects add-iam-policy-binding $YOUR_PROJECT --member="serviceAccount:$YOUR_SERVICE_ACCOUNT" --role="projects/$YOUR_PROJECT/roles/StorageViewerLister"
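You can then sanity-check that the binding took effect:
gcloud projects get-iam-policy $YOUR_PROJECT --flatten="bindings[].members" --filter="bindings.role:StorageViewerLister"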
It is generally recommended not to store credentials in a repository. The question is, where should they be stored then, so all developers have access to the same configuration?
The question is subjective: different practices may be applied. For me, the approach that worked best is using some form of "Single Sign-On" where possible and provisioning personal logins to every system for developers. This also has the advantage of letting you find out who was responsible for a destructive action (which sometimes happens).
You can also take the approach described here: store the credentials in the SCM, but in encrypted form. This lets you keep versioning without allowing access "for everyone". I'd say the best option is to combine these two approaches (and store only developer-environment "service" credentials, encrypted, in the SCM).
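One sketch of the "encrypted in the SCM" approach using plain GPG (the file name is a placeholder; tools like git-crypt automate the same idea):
gpg --symmetric --cipher-algo AES256 credentials.yml          # produces credentials.yml.gpg
git add credentials.yml.gpg
echo "credentials.yml" >> .gitignore                          # never commit the plaintext
gpg --output credentials.yml --decrypt credentials.yml.gpg    # decrypt locally with the shared passphrase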
I store the config files in a private S3 bucket and manage access via IAM. Configuration updates and revisions are handled by a small script using the AWS gem. That way anybody with sufficient privileges can access them, and we can also issue access credentials for each developer separately.
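The fetch side can be as simple as this with the AWS CLI (bucket and key names are placeholders; we happen to use the Ruby gem, but the idea is the same):
aws s3 cp s3://team-config/app/production.yml ./config/production.yml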
I am looking for a solution similar to Amazon S3 or Azure Blob Storage that can be hosted internally instead of remotely. I don't necessarily need to scale out, but I'd like to create a central location where my growing stable of apps can take advantage of file storage. I would also like to formalize file access. Does anybody know of anything like the two services I mentioned above?
I could write this myself, but if something exists then I'd rather not reinvent the wheel, unless that wheel has corners :)
The only real alternative to services like S3 and Azure blobs I've seen is Swift, though if you don't plan to scale out this may be overkill for your specific scenario.
The OpenStack Object Store project, known as Swift, offers cloud storage software so that you can store and retrieve lots of data in virtual containers. It's based on the Cloud Files offering from Rackspace.
The OpenStack Object Storage API is implemented as a set of RESTful (Representational State Transfer) web services. All authentication and container/object operations can be performed with standard HTTP calls.
http://docs.openstack.org/developer/swift/
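To give a feel for the API, here is a rough sketch against Swift's legacy tempauth (endpoint, account, and container names are placeholders):
curl -i -H "X-Auth-User: test:tester" -H "X-Auth-Key: testing" http://swift.example.com/auth/v1.0
curl -X PUT -H "X-Auth-Token: <token>" http://swift.example.com/v1/AUTH_test/mycontainer    # create a container at the returned storage URL
curl -X PUT -H "X-Auth-Token: <token>" --data-binary @hello.txt http://swift.example.com/v1/AUTH_test/mycontainer/hello.txt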
I've written a Perl CLI program that uses the WWW::Netflix::API module. It's finished and I'd like to release it but w/o exposing my consumer_secret key. Any ideas how this can be done?
I think you have two options:
Make the end users obtain their own Netflix key.
Proxy all the traffic through your own server and keep your secret key on your server.
You could keep casual users away from your secret key by distributing it with some obfuscation, but you won't keep it a secret from anyone with a modicum of skill.
Proxying all the traffic would pretty much mean setting up your own web service that mimics the parts of the Netflix API that you're using. If you're only using a small slice of the Netflix API then this could be pretty easy. However, you'd have to carefully check the Netflix terms of use to make sure you're playing by the rules.
I think you'd be better off making people get their own keys and then setting up your tool to read the keys from a configuration file of some sort.
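For instance, a hypothetical per-user config file the tool could read at startup (name and format are just an illustration):
cat > ~/.netflixrc <<'EOF'
consumer_key    = YOUR_KEY_HERE
consumer_secret = YOUR_SECRET_HERE
EOF
chmod 600 ~/.netflixrc   # readable only by the owner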