Hashicorp Vault - should you store customer-provided secrets in kv? - hashicorp-vault

I need to store very sensitive secrets that a user provides me with (username+password+certificate for authenticating into a 3rd party API).
I was first considering AWS Secrets Manager, which is very expensive and IMHO mainly for infrastructure secrets (database passwords, API keys, ...) and not customer-provided secrets. Now I'm deciding between using AWS KMS (and storing the secrets encrypted in a database (AWS RDS) using envelope encryption) and Hashicorp Vault.
https://www.vaultproject.io/docs/secrets/transit
https://www.vaultproject.io/docs/secrets/kv/kv-v2
From what I've read, I've come to the conclusion, that Vault KV is mainly for infrastructure secrets and Vault Transit might be somewhat equivalent to AWS KMS (as in, better for customer-provided secrets).
Since I'm building a very small application, if I decide to use the Vault KV, I won't need a database at all. But I'm not sure if Vault KV is the right fit.
Is there some limitation or a possible problem (for this use-case) with Vault KV I should be aware of?
Thanks

About KMS
AWS KMS really only manages the main master key. Parameter Store uses KMS under the hood to manage its encryption keys, and if you deploy Vault in AWS, you'll probably use KMS too, for auto-unseal and as the master key. You probably don't want to use KMS directly, because the other solutions give you per-secret/path policies, secret versioning, and audit logs, all features you probably want or need that KMS won't give you directly.
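The envelope-encryption route mentioned in the question can be sketched with KMS data keys. This is only a sketch: the key alias and record layout are assumptions, and the boto3 calls are shown in comments because they need real AWS credentials.

```python
# Envelope encryption with AWS KMS (sketch). With boto3 and a real CMK:
#   kms = boto3.client("kms")
#   dk = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
#   dk["Plaintext"]       # 32-byte key: encrypt the secret locally, then wipe it
#   dk["CiphertextBlob"]  # wrapped key: store it next to the ciphertext
#
# Only ciphertext and the wrapped key end up in the database row:
record = {
    "secret_ciphertext": "<AES-GCM ciphertext, base64>",
    "wrapped_data_key": "<KMS CiphertextBlob, base64>",
}
# To read the secret back: kms.decrypt(CiphertextBlob=...) returns the
# plaintext data key, which decrypts "secret_ciphertext" locally.
```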
Vault KV secrets engine versus Vault transit secrets engine
The Vault key-value secrets engine lets you store the secret, and Vault manages the encryption, audit logs, and access (and versions, if you use KV v2)
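As a sketch of what KV v2 looks like from an application, using the hvac Python client: the mount path "secret" is Vault's default, and the path and field names below are assumptions. The network calls are commented out since they need a running Vault.

```python
# The stored secret is just a JSON-serializable dict:
secret_payload = {"username": "u", "password": "p", "certificate": "<PEM>"}

# With a running Vault and the hvac client, writing and reading would be:
#   client = hvac.Client(url="http://127.0.0.1:8200", token=token)
#   client.secrets.kv.v2.create_or_update_secret(
#       path="customers/42/third-party-api", secret=secret_payload)
#   read = client.secrets.kv.v2.read_secret_version(
#       path="customers/42/third-party-api")
#   creds = read["data"]["data"]                    # the dict above
#   version = read["data"]["metadata"]["version"]   # KV v2 keeps versions
```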
The transit secrets engine can be seen as "encryption as a service":
you call it to create a keyring (think of it as a data encryption key, with rotation mechanisms built in, hence the keyring)
then you call it with a keyring reference to encrypt some data, and get encrypted cipher text back that you can store in a database or in a file. Or you can do the reverse: call it with a keyring reference and some cipher text, and ask it to decrypt it and get the data back (assuming you have the correct policies)
or, if the data you want to encrypt/decrypt is "large" (depending on your use case), you can use it to get a data key to encrypt/decrypt your data locally. (You get the key, encrypt or decrypt your data, but you don't have to deal with the encryption key's security: Vault manages that for you, so you can just wipe it from memory and get it back from Vault the next time.)
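The encrypt/decrypt round trip above can be sketched against transit's API. The keyring name is an assumption; transit expects base64-encoded plaintext, and the hvac calls are shown as comments since they need a running Vault.

```python
import base64

secret = b"api-user:api-password"                   # example customer credentials
plaintext_b64 = base64.b64encode(secret).decode()   # transit wants base64

# With hvac against a live Vault:
#   client = hvac.Client(url="http://127.0.0.1:8200", token=token)
#   r = client.secrets.transit.encrypt_data(
#       name="customer-keys", plaintext=plaintext_b64)
#   ciphertext = r["data"]["ciphertext"]   # "vault:v1:..." -> safe to store in a DB
#   r = client.secrets.transit.decrypt_data(
#       name="customer-keys", ciphertext=ciphertext)
#   recovered = base64.b64decode(r["data"]["plaintext"])

# Locally, the round trip through base64 is lossless:
recovered = base64.b64decode(plaintext_b64)
```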
Should you use Vault or AWS SSM Parameter Store?
Like so many things: it depends. It depends on your criteria. I do love both, but for different use cases, so let me list the main differences I see between the two, and hopefully it will give you enough context to make your choice:
Managed or not?: AWS SSM Parameter Store is fully managed and cheap, so this is a burden you don't have to think about. If Parameter Store fills your needs, go with it; it gives you some precious time back to work on other things.
Access management: Vault comes with a lot of authentication options and easy-to-reason-about policies. If IAM policies are enough to cover all your use cases to grant minimal access to these secrets, Parameter Store is a good option. Otherwise, Vault has you covered.
Don’t forget Vault provides a lot of other secret/encryption tools. Chances are they can benefit your project (or not, but check this)
My rule of thumb would be: if AWS IAM is enough, and you don’t have any other needs than simple secret storage, SSM Parameter Store sounds like a good idea.
If you have other encryption needs, or other authentication/policy requirements that would make it challenging to build on top of IAM, Vault will shine.
And if you have a lot of secrets to store/encrypt/decrypt, Vault's transit secrets engine to encrypt/decrypt the data and your regular DB to store the encrypted blobs will work perfectly.
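For comparison, a Parameter Store read is essentially a one-liner. This is a sketch: the parameter name is an assumption, and the boto3 call is commented because it needs AWS credentials. Hierarchical names matter here because IAM policies can be scoped to a path prefix.

```python
# Hierarchical parameter names let IAM policies grant access per path prefix:
name = "/".join(["", "myapp", "customers", "42", "api-password"])

# With boto3:
#   ssm = boto3.client("ssm")
#   resp = ssm.get_parameter(Name=name, WithDecryption=True)
#   password = resp["Parameter"]["Value"]   # SecureString, decrypted via KMS
```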

Curious why you are avoiding AWS SSM. It may be a little too limiting if you have a lot of data to encrypt and store, but just curious.
It seems like if you want to avoid the cost of Secrets Manager, then an encrypted RDS database isn't gaining you too much.
AWS Secrets Manager seems pretty reasonable unless you have a VERY high volume.

Related

data-at-rest encryption for NoSQL

Prototyping a project with Mongo & Spring Boot and thinking it does a lot of what I want. However, I really need to have encrypted data-at-rest, which would seem to indicate I have to purchase the enterprise version. Since I don't have a budget yet, I am wondering if there is another alternative that people have found useful? I think DynamoDB can be used in a local & test environment. Or is it viable to encrypt the data at the application level and still have great performance for my CRUD operations?
I've done application-level encryption with DynamoDB before with some success. My issues were not really with DynamoDB but with the encryption in the application.
First, encryption/decryption is very expensive. I had to more than double the number of servers I was using just to handle the extra CPU load. Your mileage may vary. In my case, I was using Node.js, and the servers suddenly switched from being I/O bound to being CPU bound.
Second, doing encryption/decryption application side adds a lot of complexity to your app. You will almost certainly need to parallelize the encryption/decryption to minimize the added latency that it will cause. Also, you will need to figure out a secure way of sharing the keys.
Last, application level encryption will make some DynamoDB operations unavailable to you. For example, conditions probably won't make sense anymore for encrypted values.
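The conditions point follows from the fact that sound encryption is randomized: encrypting the same value twice yields different ciphertexts, so an equality condition on the ciphertext can never match. One common workaround, sketched below with only the standard library (the names are assumptions), is to store a deterministic keyed hash, a "blind index", next to the encrypted value and query on that instead.

```python
import hashlib
import hmac
import os

def blind_index(key: bytes, value: str) -> str:
    """Deterministic keyed hash: equal plaintexts map to equal indexes,
    so equality lookups still work without storing the plaintext."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

index_key = os.urandom(32)  # manage this key like any other secret

idx_a = blind_index(index_key, "alice@example.com")
idx_b = blind_index(index_key, "alice@example.com")
idx_c = blind_index(index_key, "bob@example.com")
# idx_a == idx_b but idx_a != idx_c: you can put a condition on the index
# column while the encrypted value itself stays non-deterministic.
```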
Long story short, I wouldn't recommend application level encryption regardless of the database.
DynamoDB now supports what they call Server-Side Encryption at Rest. Personally I think that name is a little confusing but from their perspective, your application is the client and DynamoDB is the server.
Amazon DynamoDB encryption at rest helps you secure your application
data in Amazon DynamoDB tables further using AWS-managed encryption
keys stored in AWS Key Management Service (KMS). Encryption at rest is
fully transparent to the user with all DynamoDB queries working
seamlessly on encrypted data. With this new capability, it has never
been easier to use DynamoDB for security-sensitive applications with
strict encryption compliance and regulatory requirements.
Blog post about DynamoDB encryption at rest
You simply enable encryption when you create a new table and DynamoDB
takes care of the rest. Your data (tables, local secondary indexes,
and global secondary indexes) will be encrypted using AES-256 and a
service-default AWS Key Management Service (KMS) key. The encryption
adds no storage overhead and is completely transparent; you can
insert, query, scan, and delete items as before. The team did not
observe any changes in latency after enabling encryption and running
several different workloads on an encrypted DynamoDB table.
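Turning it on at table creation is a single extra argument. This is a sketch: the table and key names are assumptions, and the boto3 call is commented because it needs AWS credentials.

```python
sse_spec = {"Enabled": True}   # AES-256 under a KMS-managed key

# With boto3:
#   dynamodb = boto3.client("dynamodb")
#   dynamodb.create_table(
#       TableName="customer-secrets",
#       AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
#       KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
#       BillingMode="PAY_PER_REQUEST",
#       SSESpecification=sse_spec,
#   )
```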

Can I read Vaults secrets?

I understand the whole idea behind Hashi Vault is to store secrets securely. But for debugging purposes, is there a way to view or print the dynamic secrets generated by the transit or AWS secrets engines and others?
This question was asked a while ago, but:
In general, it depends, but likely no; Vault is designed to be very restrictive with its secrets, and the implementation varies by engine.
As to the two engines you mentioned:
The transit engine does support exportable keys, so that may be one way to get the desired output. But you would also need the data to decrypt.
The AWS secrets engine requires configuration to connect to AWS, so it may be possible to query the config for that info. However, I am not sure, as I do not use the AWS secrets engine much.
Both of these methods will need access with significant "sudo" permissions or the root Vault user.
Finally, if you have enough of the unseal keys, access to Vault's backend, and an understanding of Vault's inner workings, you could look directly at Vault's storage.
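For the exportable-keys route, a sketch of what the call would look like with the hvac client: the key name is an assumption, and it only works if the key was created with exportable=true and the token is highly privileged, so the live call is shown in comments.

```python
# Transit export endpoint: GET transit/export/:key_type/:name
key_types = ("encryption-key", "signing-key", "hmac-key")

# With hvac against a live Vault:
#   client = hvac.Client(url="http://127.0.0.1:8200", token=privileged_token)
#   resp = client.secrets.transit.export_key(
#       name="customer-keys", key_type="encryption-key")
#   resp["data"]["keys"]   # maps key version -> raw key material
```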

SCM: Storing Credentials

It is generally recommended not to store credentials in a repository. The question is, where should they be stored then, so all developers have access to the same configuration?
The question is subjective - different practices may be applied. For me, the approach that worked best is utilising some form of "Single Sign-On" where possible and provisioning personal logins to every system for developers. This also has the advantage of being able to find out who was responsible for a destructive action (which sometimes happens).
You can also take the approach described here: store the credentials in the SCM, but in encrypted form. This allows you to maintain versioning, yet does not open access "for everyone". I'd say the best option is to combine these two approaches (and store only developer-environment "service" credentials - encrypted - in the SCM).
I store the config files in a private S3 bucket and manage access via IAM. The configuration updates and revisions are handled by a small script using the AWS gem. That way anybody with sufficient privileges can access them, and we can also issue access credentials for each developer separately.

Blob Storage Server with REST API

I am looking for a solution similar to Amazon S3 or Azure Blob Storage that can be hosted internally instead of remotely. I don't necessarily need to scale out, but I'd like to create a central location where my growing stable of apps can take advantage of file storage. I would also like to formalize file access. Does anybody know of anything like the two services I mentioned above?
I could write this myself, but if something exists then I'd rather not reinvent the wheel, unless that wheel has corners :)
The only real alternative to services like S3 and Azure blobs I've seen is Swift, though if you don't plan to scale out this may be overkill for your specific scenario.
The OpenStack Object Store project, known as Swift, offers cloud storage software so that you can store and retrieve lots of data in virtual containers. It's based on the Cloud Files offering from Rackspace.
The OpenStack Object Storage API is implemented as a set of RESTful (Representational State Transfer) web services. All authentication and container/object operations can be performed with standard HTTP calls.
http://docs.openstack.org/developer/swift/

Best Practices for REST Shared Secret Value

I am using a REST API that uses oauth for authentication. When registering for the service I was given my API Consumer Key and my API Shared Secret. I've been simply hardcoding the Shared Secret into my Application code and compiling it.
Is this the best way to manage a Shared Secret? That is, are there any security implications?
Should this be encrypted in some way? What are the best practices for managing this Shared Secret?
It depends a bit on where your code is running.
In your case, a hacker would need to steal your DLL and read the key from the DLL.
This is better than storing the key in a configuration file in plain text.
You could store the key encrypted in a database, with the information about how to decrypt it in your DLL. That way a hacker would have to steal both your DLL and information from your database.
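A lighter variant of the same idea, sketched below with only the standard library (the variable name is an assumption), is to keep the secret out of the compiled code entirely and inject it at runtime, from the environment or a secrets store:

```python
import os

# Inject the secret at deploy/run time instead of compiling it in.
os.environ.setdefault("API_SHARED_SECRET", "dev-only-placeholder")

shared_secret = os.environ["API_SHARED_SECRET"]
# In production, refuse to start with a missing or placeholder value
# rather than run with a secret everyone can read in the source tree.
if shared_secret == "dev-only-placeholder":
    print("warning: running with a development placeholder secret")
```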