Doorkeeper oauth_access_tokens too large - doorkeeper

We are using the Doorkeeper gem for OAuth2. However, our oauth_access_tokens table is getting way too large.
Are old access tokens supposed to be removed automatically? If so, why are they not removed, and is this perhaps a setting?
If they are not removed automatically, what would be a good strategy to regularly purge old tokens?

For anyone googling for an answer in 2019:
Add this line to your Rakefile:
Doorkeeper::Rake.load_tasks
It will add some tasks to your environment:
rake doorkeeper:db:cleanup # Removes stale data from doorkeeper related database tables
rake doorkeeper:db:cleanup:expired_grants # Removes expired (TTL passed) access grants
rake doorkeeper:db:cleanup:expired_tokens # Removes expired (TTL passed) access tokens
rake doorkeeper:db:cleanup:revoked_grants # Removes stale access grants
rake doorkeeper:db:cleanup:revoked_tokens # Removes stale access tokens
Documentation can be found here.
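If you want this to run on a schedule, a minimal sketch is a nightly cron entry invoking the cleanup task (the application path, schedule and RAILS_ENV below are assumptions; adjust for your deployment):
# Hypothetical crontab entry: run Doorkeeper's cleanup task every night at 03:00
0 3 * * * cd /var/www/myapp && RAILS_ENV=production bundle exec rake doorkeeper:db:cleanup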

Are old access tokens supposed to be removed automatically? If so, why are they not removed, and is this perhaps a setting?
As I understand it, old tokens are not removed automatically.
If they are not removed automatically, what would be a good strategy to regularly purge old tokens?
In our case, we used a cron job to regularly clean up the database.
We delete tokens that have already been revoked, as well as tokens that have not been revoked but are older than a few months (anyway, this is just an example).
You can also make the token lifespan longer so that fewer tokens are produced, but as you know, that is not recommended for security reasons.
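As a rough sketch of that strategy (the three-month cutoff, application path and schedule are just assumptions), a cron entry could run a Rails one-liner against Doorkeeper's access token model:
# Hypothetical nightly purge: delete revoked tokens and tokens older than 3 months
0 4 * * * cd /var/www/myapp && RAILS_ENV=production bundle exec rails runner "Doorkeeper::AccessToken.where('revoked_at IS NOT NULL OR created_at < ?', 3.months.ago).delete_all"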
Hope it can help.

Related

Rundeck - Where do I update rundeck auth token after generating new token?

I am using the Rundeck rd CLI to invoke other jobs from one job. The token had expired, but even after updating the user tokens (with no expiration), I still see the same error.
Is there any config file I need to update the token in?
You said the property for non-expiring tokens is already set, right?
Did you configure it like this?
rundeck.api.tokens.duration.max=0
You can restart Rundeck and try again.
I had exported RD_TOKEN. I replaced it with the new token and it worked.
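For anyone hitting the same thing, the rd CLI reads its connection details from environment variables, so a stale exported token will keep producing the same auth error. A minimal sketch (URL, token value and project name are placeholders):
# rd CLI picks these up from the environment
export RD_URL="https://rundeck.example.com:4440"
export RD_TOKEN="<newly-generated-api-token>"
# Verify the new token works
rd jobs list -p myproject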

Kubernetes service account tokens

Does anyone know if this change https://github.com/kubernetes/kubernetes/pull/72179 will affect the service account tokens and their ability not to expire? Right now we have such tokens in our CI/CD and we rely that these will not expire.
According to this:
This only changes the source of the credentials used by the controller loops started by the kube-controller-manager process. Existing use of tokens retrieved from secrets is not affected.

How to use Hashicorp Vault's AppRole in production?

We have installed and configured Hashicorp Vault AppRole authentication for one server, by storing the role_id and secret_id in a local file on the server, and we're able to have code on the server read the values from file, authenticate to Vault, receive a token and then read the secrets it needs from Vault. So far so good. However, the secret_id expires after 31 days, and so the process fails.
I've read up on the concepts of using AppRoles, and they seem like the perfect fit for our use case, but for this expiration. We don't want to have to re-generate the secret_id every month.
From what I've read, if you create the role without setting secret_id_ttl it should be non-expiring, but that isn't the case. This may be due to how the AppRole auth method is configured, but I haven't seen anything solid on this.
So I found an article on the Hashicorp website where AppRoles are discussed in detail. The article gives good arguments for expiring secret_ids in a CI/CD environment, even illustrating how this works in 8 simple steps. I understand how this works, but the article fails to mention how the CI/CD and orchestrator systems themselves are authenticated to Vault. Or am I missing something?
In the end, I want to have the secret_id not expire. Ever.
Without additional support from your environment you will have to write some logic in your installer, and have a service manager of some sort to start your services. In many cloud environments, you may already have the equivalent entities (Terraform, Cloud Formation, etc.) and you should leverage their secrets management capabilities where needed.
For custom installations, here is a workflow that I have used.
1. Have an installation manager process that can be invoked to perform installation / upgrade. Make sure installation / upgrade of services always goes through this process.
2. Have a service manager process that is responsible for starting individual services and monitoring / restarting them. Make sure service start-ups always go via this service manager.
3. During installation, generate self-signed certificates for Vault, the installation manager and the service manager. Vault should trust the certs for the installation manager and the service manager. Store these with limited permissions (600) in directories owned by the installation user or the service manager user, as the case may be. Set up certificate-based authentication in Vault using these certs (see the sketch after this list).
4. These credentials should have limited capabilities associated with them. The installation manager should only be able to create new roles and not delete anything. The service manager should only be able to create secrets for the named roles created by the installation manager, and delete nothing.
5. During installation / upgrade, the installation manager should connect to Vault and create all necessary service-specific roles. It should also set the role ids for individual services in per-service config files that the services read on start-up.
6. During each service's start-up, the service manager should connect to Vault and create a secret id corresponding to the service's role. It should set the secret id in an environment variable and start the service. The secret id should have time-bound validity (by setting TTLs) so that it cannot be used for much beyond the creation of the auth token (see #7).
7. Each service should read the role id from the config file and the secret id from the environment variable. It should then generate the auth token using these two, and use the token to authenticate itself with Vault for its lifetime.
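As a rough illustration of step 3, the Vault side of the certificate-based authentication could look something like this (the role name, policy name and certificate file path are assumptions):
# Enable the TLS certificate auth method
vault auth enable cert
# Register the service manager's certificate and attach a limited policy
vault write auth/cert/certs/service-manager \
    display_name=service-manager \
    policies=service-manager \
    certificate=@service-manager.pem \
    ttl=3600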
It is possible to create a Vault AppRole with a secret_id that essentially never expires. However, this should be limited to use on a Vault development server -- one that does not contain any production credentials -- and for use in a development environment.
That being said, here's the procedure I used based on several articles in the Vault documentation, but primarily AppRole Pull Authentication.
This assumes that the Vault approle authentication method is already installed at approle/ and that you are logged in to Vault, have root or admin privileges on the Vault server and have a valid, non-expired token.
Note: For the values supplied for the fields below, the maximum value that vault seems to accept is 999,999,999. For the TTL fields, that is the number of seconds which comes out to more than 31 years. That's not forever, but it is long enough that renewing the secret_id will probably be somebody else's problem (SEP).
# Vault server address to be used by the Vault CLI.
export VAULT_ADDR="https://vault-dev.example.com:8200/"
# Vault namespace to be used by the CLI.
# Required for Cloud and Enterprise editions
# Not applicable for Open Source edition
export VAULT_NAMESPACE="admin"
# The name of the Vault AppRole
export VAULT_ROLE=my-approle
# Override defaults on the approle authentication method
# NOTE: In this command, the field names, default-lease-ttl
# and max-lease-ttl contain dashes ('-'), NOT
# underscores ('_'), and are preceded by a single
# dash ('-').
vault auth tune \
-default-lease-ttl=999999999 \
-max-lease-ttl=999999999 approle/
# Override defaults on the approle
# NOTE: In this command, the field names, secret_id_ttl and
# secret_id_num_uses contain underscores ('_'), NOT
# dashes ('-'), and are NOT preceded by a single
# dash ('-').
vault write auth/approle/role/my-approle \
secret_id_ttl=999999999 \
secret_id_num_uses=999999999
# Create a new secret_id for the approle which uses the new defaults
vault write -f auth/approle/role/my-approle/secret-id
Update the server config file to use the new secret_id and you are ready to go.
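For completeness, the application side then exchanges the role_id and secret_id for a client token roughly like this (the role-id and secret-id values are placeholders read from your config):
# Look up the role_id once (requires suitable permissions)
vault read auth/approle/role/my-approle/role-id
# The application logs in with role_id + secret_id and receives a client token
vault write auth/approle/login \
    role_id="<role-id>" \
    secret_id="<secret-id>"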
As the OP has noted, the Hashicorp Vault documentation assumes that the application is able to authenticate, somehow, to the vault, retrieve the secret ID (possibly wrapped), and then use that to authenticate and fetch a token used to actually work with secrets. The answers here pose alternative approaches to retrieving that initial token.
Alan Thatcher wrote a blog article, Vault AppRole Authentication, that provides another well thought out approach:
Create a policy that allows the user to retrieve the secret-id and role-id, but nothing else (see the sketch after this list).
Create a long-lived, periodic/renewable token based on that policy.
Store the long-lived token securely, e.g. as a Kubernetes secret.
At runtime, use the long-lived token to:
acquire the secret-id and role-id,
authenticate to Vault using these and acquire a short-lived token,
use the current short-lived token to work with secrets.
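A rough sketch of the first two steps, creating the limited policy and the periodic token (the policy name, AppRole name and period are assumptions):
# Policy that can only fetch the role-id and create secret-ids for one AppRole
vault policy write approle-login - <<EOF
path "auth/approle/role/my-approle/role-id" {
  capabilities = ["read"]
}
path "auth/approle/role/my-approle/secret-id" {
  capabilities = ["create", "update"]
}
EOF
# Long-lived periodic token tied to that policy (renew it before each period ends)
vault token create -policy=approle-login -period=24h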
For Java applications, the Spring Vault project supports this approach if you configure the long-lived token as the "initial token" and the AppRole authentication name, e.g. chef-ro in the blog's case.
My personal feeling is that this approach is about as secure as, but a bit simpler than, the mutual TLS approach. I agree that using an infinite TTL for the secret-id is a less secure practice for production environments.
Thanks to Mr. Thatcher for thinking this one through.
This is probably not the canonical answer, but I found this question unanswered, so I decided to add some pointers.
As per Hashicorp Vault AppRole: role-id and secret-id:
Additional brownie information: Ideally, it's best practice to keep the TTL low, 30 minutes max - if your application is stateful, or maybe even less if it's a stateless application. The secret key of Vault approle should also be rotated every 90 days. Please note by default, Vault approle backend has 31 days of TTL, so if you want to set it to 90 days, you need to increase TTL of the approle backend as well.
However (in the same question):
You can generate secret-id with indefinite validity. But doing so will be as good as keeping your secrets in the configuration file.
For ephemeral instances you can use configuration management to pass in secrets via a third (broker) role. With regard to a server that exists indefinitely, I'm still working that out...
Ideas:
TLS certificates might work well on Windows; I don't know about Linux.
GitHub Personal Access Tokens, but this is not org-friendly.
Review the other auth methods available to see if there's one that fits your requirements (e.g. AWS).

Where should I store Vault's Unseal Key and Root Token?

Where should I store the Unseal Key and Root Token for HashiCorp Vault?
The Vault will be used by various members on the team.
In best-practice usage, you wouldn't be storing a root token - once done with it, it should be revoked.
Root tokens are useful in development but should be extremely carefully guarded in production. In fact, the Vault team recommends that root tokens are only used for just enough initial setup (usually, setting up auth methods and policies necessary to allow administrators to acquire more limited tokens) or in emergencies, and are revoked immediately after they are no longer needed. If a new root token is needed, the operator generate-root command and associated API endpoint can be used to generate one on-the-fly.
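In practice that looks roughly like this (a sketch only; the token value is a placeholder, and the full generate-root flow additionally requires a quorum of unseal key holders to supply their key shares):
# Revoke the root token as soon as initial setup is done
vault token revoke <root-token>
# If a new root token is ever needed, start the generate-root flow
vault operator generate-root -init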
Unseal keys should be distributed amongst trusted people, with nobody having access to more than one of them.
This then requires more than one person to restart vault or to gain root access to it.
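The number of key shares and the quorum required to unseal are set when the Vault is first initialised, so pick values that match how many trusted people you have (the 5/3 split below is just a common example):
# Initialise Vault with 5 unseal key shares, any 3 of which can unseal it
vault operator init -key-shares=5 -key-threshold=3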
The documentation doesn't suggest any good hiding places for the individual unseal keys that I could find - I'd suggest wherever you normally store passwords, i.e. a password manager.
For day-to-day usage, users can log in using the user/pass or ldap auth backends.

How to revoke signed certificate in Kubernetes cluster?

kube-apiserver does not seem to provide an option to use a certificate revocation list (CRL).
Is there a way to revoke a client certificate if it's lost or not used anymore?
As far as I know there isn't a way to directly revoke certificates via a CRL. However, what does work, and what we are currently using, is ABAC policies that identify users (via the Common Name of their certificate) and control whether they have access to a given resource in Kubernetes.
As an example, say you have a user called "random". You would generate a Client Certificate for them from your given Certificate Authority, with a Common Name of "random".
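Issuing such a client certificate from the cluster CA might look something like this (key size, validity and file names are assumptions):
# Key and CSR with the user name as the Common Name
openssl genrsa -out random.key 2048
openssl req -new -key random.key -out random.csr -subj "/CN=random"
# Sign it with the cluster's CA
openssl x509 -req -in random.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out random.crt -days 365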
From there, you can have an ABAC policy file (a file with one JSON object per line), with permissions set for the user "random" that give them a certain level of access to the Kubernetes API. You can give them access to everything, to certain namespaces, or to other API parameters. If you need to revoke permissions, you simply delete that user from the ABAC policy file.
We've tested this, and it works well. The unfortunate thing, I will say, is that you have to restart the Kubernetes API service for those changes to take effect, so there may be a few seconds of downtime. Obviously in a development environment this isn't a big deal, but in production you may need to schedule time for users to be added.
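For illustration, a policy line for the hypothetical user "random", restricted to a single namespace, could look like this (the file path and namespace are assumptions; the file is passed to kube-apiserver via --authorization-policy-file together with --authorization-mode=ABAC):
# Append a policy entry granting "random" full access within the "projectx" namespace
cat >> /etc/kubernetes/abac-policy.jsonl <<'EOF'
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "random", "namespace": "projectx", "resource": "*", "apiGroup": "*"}}
EOF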
Hopefully in the future a simple "kube-apiserver reload" will allow for a re-read of that ABAC policy file.
One final thing to note: when using Client Certificates for ABAC authentication, you will need to set permissions for users INDIVIDUALLY. Unlike with auth tokens with ABAC, you cannot set Client Certificate users in "groups." Something that caused us headaches, so figured it was worth passing on. :)
Hope this helps!