Can I use Vault like an Amazon KMS service? - hashicorp-vault

I am looking for a system that allows me to create and store symmetric master keys in a safe manner. One such system is Amazon KMS, where I can create a master key per user and use it to encrypt some data (e.g. the user's private keys).
But I need to support several platforms, so I have a question about the Vault project (https://www.vaultproject.io). Is it an appropriate tool for this task?
I have found that Vault supports username/password authentication (https://www.vaultproject.io/docs/auth/userpass.html), and I am wondering whether it is okay to use this API intensively and store 50k users or so.
That said, it looks like these services solve different problems, and Vault is not supposed to be used like the Amazon KMS service. But I need to discuss this idea with someone in order to be completely sure.
Many thanks!

You may look into the Cubbyhole backend for Vault. This backend works like a unique space for each token: destroying the access token deletes all the data stored in its cubbyhole space.
From Cubbyhole authentication principles:
The cubbyhole backend is a simple filesystem abstraction similar to the generic backend (which is mounted by default at secret/) with one important twist: the entire filesystem is scoped to a single token and is completely inaccessible to any other token.
In other words, it does not matter what policies are attached to the token; what matters is the token itself. Only a single token can be used to set or retrieve values in its cubbyhole.
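For illustration, here is a minimal sketch of writing and reading a per-token value through the cubbyhole backend over Vault's raw HTTP API (this assumes Node 18+ with global fetch, run as an ES module, and that VAULT_ADDR/VAULT_TOKEN point at a running Vault server; the path and payload are made up):
// Minimal sketch: per-token secret storage via Vault's cubbyhole backend.
const VAULT_ADDR = process.env.VAULT_ADDR || 'http://127.0.0.1:8200';
const VAULT_TOKEN = process.env.VAULT_TOKEN;

async function writeCubbyhole(path, data) {
  const res = await fetch(`${VAULT_ADDR}/v1/cubbyhole/${path}`, {
    method: 'POST',
    headers: { 'X-Vault-Token': VAULT_TOKEN, 'Content-Type': 'application/json' },
    body: JSON.stringify(data),
  });
  if (!res.ok) throw new Error(`cubbyhole write failed: ${res.status}`);
}

async function readCubbyhole(path) {
  const res = await fetch(`${VAULT_ADDR}/v1/cubbyhole/${path}`, {
    headers: { 'X-Vault-Token': VAULT_TOKEN },
  });
  if (!res.ok) throw new Error(`cubbyhole read failed: ${res.status}`);
  return (await res.json()).data; // only the token that wrote it can read it back
}

// Example: stash a per-user master key under this token's cubbyhole.
await writeCubbyhole('master-key', { key: 'base64-encoded-key-material' });
console.log(await readCubbyhole('master-key'));
Because the value lives in the token's own cubbyhole, revoking that token destroys the stored data along with it.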

Related

Firebase Realtime Database Security Rules for personal app

I have created an application in Flutter, and I will be the only one to use it since it makes my work easier. It uses the Firebase Realtime Database to synchronize data between my devices. When I read the Firebase documentation, I realized I needed to protect my database to prevent access from strangers, so I looked for a way to set some kind of password to pass as a payload when reading and writing data. There doesn't seem to be anything like that; I would have to implement Firebase Auth to do it. So I opted to create my own dataset with a very particular name, and to set the read and write rules only on that particular path. My rules look like this:
{
  "rules": {
    "dataset-verylong32charstringwithalphanumericvalue": {
      ".read": "true",
      ".write": "true"
    }
  }
}
So in theory any other access attempt should be blocked. Since this is a bit of an odd method and not described in the documentation, can I consider it safe?
Obviously I know that if some malicious person gets wind of my string they will have full access to my data, but since the chances of that happening are low, I just need superficial protection against abuse of the service.
I have tried making REST requests and all attempts seem to be blocked, so I expect it to be secure. However, I fear there may be a method to map all the paths in my database and then easily derive my string.
What you're using is known as a shared secret. If it meets your needs then that's a valid way of securing access to the data. There is no way through the client-side SDKs and API to read the root of the database, and thus to learn your secret path that way.
For example, download URLs generated by Firebase for Cloud Storage depend on a shared secret to make files publicly readable by everyone who has that secret (it's the token parameter in the URL).
I also used this approach myself when dealing with the data for an events web site. The site was statically regenerated when the data in the database changed, so the shared secret never ended up in the published site.
The problem you need to figure out is how you're going to get the shared secret to the consumers of the data. In the examples above, we either expect that anyone may possibly obtain the secret but can't do any harm with it (since download URLs are read-only), or we expect only trusted services to know the secret and it never to reach anyone else. If your use-case is different from these, finding a way to share the secret out of band may become your next problem to solve.
A simple alternative is to implement anonymous authentication, which allows the app to sign in without requiring any credentials from the user and with a single line of code. With that you can then restrict access to the data to just the UID(s) that you know in your database's security rules. I usually hard-code the UID in the rules, until I get tired of adding/updating them, at which point I switch over to storing an allow-list of them in the database itself.
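As a rough sketch of that anonymous-authentication approach (using the modular Firebase JS Web SDK; the project config, database path, and the idea of hard-coding your own UID in the rules are placeholders/assumptions):
// Sketch: sign in anonymously, then lock the database down to known UIDs in the rules.
import { initializeApp } from 'firebase/app';
import { getAuth, signInAnonymously } from 'firebase/auth';
import { getDatabase, ref, get } from 'firebase/database';

const app = initializeApp({ /* your Firebase project config */ });
const auth = getAuth(app);

const { user } = await signInAnonymously(auth);
console.log('My UID:', user.uid); // hard-code this UID in your rules, e.g. ".read": "auth.uid === '<that uid>'"

// Reads now carry the UID, so UID-restricted rules will pass for this device only.
const snapshot = await get(ref(getDatabase(app), 'mydata/some-node'));
console.log(snapshot.val());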
The secret-path approach you describe is not a safe or efficient method.
If you are the only one who will use this app and can create user accounts, then you could just check if the user is authenticated or not, e.g.
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if request.auth != null;
    }
  }
}
If you need to add another layer of security, then you can add custom user claims to the authentication token: https://firebase.google.com/docs/auth/admin/custom-claims
You can then securely access those extra claim fields in the rules (e.g. "request.auth.token.customField") and compare them to other data in your database.
Adding the claims is typically done on a secure backend to keep the Firebase admin details private - but if you are the only user on the frontend app(s), it shouldn't be a security concern to do it on the front end too.
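For what it's worth, a minimal sketch of adding such a claim with the Node Admin SDK (the UID and claim name are made up, and this assumes default admin credentials are available to the process):
// Sketch: attach a custom claim to a user; it then appears under request.auth.token in rules.
const admin = require('firebase-admin');

admin.initializeApp(); // picks up GOOGLE_APPLICATION_CREDENTIALS / default credentials

async function grantOwnerClaim(uid) {
  await admin.auth().setCustomUserClaims(uid, { owner: true });
  // The user must refresh their ID token (sign out/in or force a refresh) before
  // rules like "request.auth.token.owner == true" see the new claim.
}

grantOwnerClaim('some-user-uid').catch(console.error);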

Why does `ServiceAccount` exist, but there is no such entity for a regular human user?

When managing entities semantically connected with Kubernetes, it makes sense to let Kubernetes manage them. Kubernetes manages ServiceAccount as a resource kind, but does not have similar kinds for human users or groups.
I'm wondering what is the design decision behind this. I know that ServiceAccount gets a token generated on creation that is used for further access, and for accessing a cluster as a human user, you need a config file with a public key signed with Kubernetes's CA.
It seems inconsistent: why not create a similar resource kind, say HumanAccount or Account, and supply a public key on creation (a public key is reasonably safe to store even in plain text in a YAML manifest), so that Kubernetes signs it and from that point on accepts connections from the user?
I know that Certificate Signing Requests need to be approved by the administrator. Maybe that is the reason?
Any hints or insights appreciated!
There actually are User entities in RBAC, but Kubernetes delegates the instantiation of those to either the CN field of mTLS client certificates, the sub claim of an OIDC token, or, in a very customized setup, usernames provided via HTTP headers.

JWT Authorisation through multiple microservices

The question in short
What would be the correct way to handle JWT authentication for a user who is accessing data from microservice A, where microservice A in turn requires data from microservice B?
The Setup
I am using Auth0 to issue and handle authentication and authorisation.
I have two microservices set up. The user is logged in on the front end and needs to load tenant-based accounting information.
Microservice A is an aggregate service that communicates with multiple different services in order to provide a standardized response for different data types.
Microservice B is queried by Microservice A in order to retrieve vehicle information owned by the user.
Which solution would be the correct approach?
Solution A:
The token that was issued to the user when he logged in will be used by MS A to communicate with MS B. MS A is essentially forwarding the token provided by the user.
Solution B:
MS A has its own JWT which has super admin rights to access any resource from MS B. MS A will use this token when accessing resources from MS B and MS A will take responsibility for ensuring that resources not accessible to the user are not returned in the aggregated dataset.
In solution B you put a lot of responsibility on MS A to handle the data properly. In my opinion that is just asking for trouble: eventually someone will manage to call MS A in such a way as to get more data than they should be able to access. So I would go with solution A, or a variant of it. There are a couple of ways tokens can be shared between services - passing the same token along, embedding tokens, or exchanging tokens. I wrote an article about token sharing a while back; you can have a look at it.
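For reference, here is a stripped-down sketch of what Solution A's token forwarding could look like in an Express-style aggregate service (the route, internal URL, and response shape are invented for the example, and Node 18+ global fetch is assumed):
// Sketch: microservice A forwards the caller's bearer token when calling microservice B.
const express = require('express');
const app = express();

app.get('/aggregate/vehicles', async (req, res) => {
  const authHeader = req.headers.authorization; // "Bearer <user JWT>", validated by upstream middleware
  if (!authHeader) return res.status(401).json({ error: 'missing token' });

  // Pass the same token along, so B can enforce authorization against the actual user.
  const vehicles = await fetch('http://microservice-b.internal/vehicles', {
    headers: { Authorization: authHeader },
  }).then(r => r.json());

  res.json({ vehicles });
});

app.listen(3000);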
I think it depends - both options are OK depending on the scenario.
Here are a few things to consider.
Does microservice B do any resource-owner authorization, e.g. using the sub claim from the user's JWT? If so, it might be easier to just pass through the user access token. Note: generally I only use data inferred from the JWT for authZ logic. If you need something like the resource owner ID as part of the business logic, you should explicitly include it in the request payload, so that the service can also be consumed by back-end services using M2M tokens.
With Auth0, there's a cost for machine MUAs (monthly unique authentications). This is separate from the cost for user MUAs. For an enterprise account the default is 1000 M2M MUAs. Increasing this number is pretty cheap, but still something to consider. In general you should configure your M2M tokens to have a longer expiration (e.g. 1-2 days) and use token caching (a small caching sketch follows at the end of this answer). Generally an in-memory cache will suffice, but if you're building serverless apps (e.g. AWS Lambda, Azure Function Apps, etc.) you'll probably need a distributed cache. All of this is extra work, complexity, and potentially cost (e.g. for the distributed cache) for M2M tokens.
There's an additional network request (to Auth0) for M2M authentications. This is mitigated through token caching, but it's something to consider as well.
Is microservice B going to be publicly accessible? If yes, you may have some security considerations, as users will be able to call microservice B directly with their (legitimately obtained) access tokens and might be able to tamper with requests in ways you weren't intending.
The company I work for uses a mix of the two approaches you mentioned (which one generally depends on the scenario). We have some helper utils for things like token caching, which make both approaches easy enough, but if you're going to use M2M tokens you'll probably have a bit of additional up-front work.
One last thing to mention: when implementing 'transitive' services like microservice B, I think it's good to ensure the service is consumable by both user and machine tokens. Your authZ policies might allow for one or the other (or both), but at least you'll have the flexibility to change if/when needed.
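To make the token-caching point above concrete, here is a rough sketch of fetching and caching an Auth0 client-credentials (M2M) token in memory (the tenant domain, audience, and environment variable names are placeholders):
// Sketch: obtain an Auth0 M2M token via the client-credentials grant, cached in memory.
let cached = { token: null, expiresAt: 0 };

async function getM2MToken() {
  if (cached.token && Date.now() < cached.expiresAt - 60_000) return cached.token; // 60s leeway

  const res = await fetch('https://YOUR_TENANT.auth0.com/oauth/token', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      grant_type: 'client_credentials',
      client_id: process.env.AUTH0_CLIENT_ID,
      client_secret: process.env.AUTH0_CLIENT_SECRET,
      audience: 'https://microservice-b.example.com', // the API identifier configured in Auth0
    }),
  });
  const { access_token, expires_in } = await res.json();

  cached = { token: access_token, expiresAt: Date.now() + expires_in * 1000 };
  return cached.token;
}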

Credentials in Streamsets

In my current project I'm working with StreamSets and I would like to use HashiCorp Vault as my credentials store; however, I'm not able to use the credential:get() function wherever I want to, e.g. in the Shared Access Key field of the Azure IoT Hub Producer stage. I know that I could use runtime properties, but I don't think that solves my problem.
Am I missing something, or can I use credential:get() only in fields marked with a key icon?
You can only use credential:get() in fields marked with a key icon. This is by design, to minimize the chance of leaking credentials. For example, if credential:get() were allowed in URL parameters, a pipeline designer could send a request to a web server under their control to discover the credential. It may make sense to allow the Shared Access Key field to receive credentials - please file an enhancement request at https://issues.streamsets.com.

Node - request authentication using secret

Let's assume I have a REST API with products and I want it to be accessible only to specified users.
The list of users is known to me, and I'm looking for some way to give them safe access to my API.
I don't want to create authentication with usernames and passwords, generate tokens, and all that stuff.
What I can imagine is giving each of my users some secret string that they use in every request.
What I need is an example/tutorial/name for this kind of solution; I'm sure there are standards for this, but I don't know them.
I know it's kind of a nooby question - sorry for that, I'm just asking ;).
Thanks in advance!!
You are looking for simple shared-secret authentication. A simple solution in this case is just to check for the secret as a query parameter (or it could be sent in a request header). For example, the client can call:
https://example.com/valuable-stuff?secret=2Hard2Gue$$
You could implement this in an Express-style request handler as:
const SECRET = '2Hard2Gue$$';

// Reject any request that doesn't carry the shared secret.
function showValuableStuff(req, res) {
  if (req.query.secret !== SECRET) {
    return res.status(404).send('Not Found');
  }
  // now retrieve and output the resource
  res.json({ /* ... */ });
}
Some practical considerations are:
Use a secure connection to prevent the secret from being leaked (i.e. expose the endpoint only over HTTPS).
Be careful where you store the source code if you're hard-coding the secret. A fancier solution is to use an environment variable set on the server, so you keep the secret out of the source code, or at least to encrypt the part of the source that contains it.
While this is fine for simple solutions, it violates the basic security principle of accountability, because you are sharing one secret with multiple people. You might want to consider allocating each individual their own random string, in which case you may as well use HTTP Basic Authentication, as it's well supported by Apache and other web servers and is still quite a lightweight approach.
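If you do go the per-user route, a bare-bones sketch of checking HTTP Basic credentials in an Express-style middleware might look like this (the in-memory user table is just a stand-in for real storage):
// Sketch: per-user secrets via HTTP Basic Authentication instead of one shared string.
const users = { alice: 's3cret-for-alice', bob: 's3cret-for-bob' }; // placeholder store

function checkBasicAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const [scheme, encoded] = header.split(' ');
  if (scheme !== 'Basic' || !encoded) {
    return res.status(401).set('WWW-Authenticate', 'Basic realm="products"').end();
  }
  const [user, secret] = Buffer.from(encoded, 'base64').toString().split(':');
  if (users[user] !== secret) return res.status(401).end();
  req.user = user; // you now know who made the request, which restores accountability
  next();
}

// Usage: app.get('/products', checkBasicAuth, (req, res) => res.json(products));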