How can I rotate admin.conf in Kubernetes?

I need to rotate admin.conf for a cluster so that old users who used it as their kubeconfig are no longer allowed to perform actions.
How can I do that?

This is a Community Wiki answer, posted for better visibility, so feel free to edit it and add any additional details you consider important.
As mdaniel wrote in his comment:
the answer to your question is "rekey the entire apiserver CA hierarchy" or wait for the admin.conf cert to expire, because those admin.conf credentials are absolute. Next time, use the provided OIDC mechanism for user auth.
For a kubeadm-based Kubernetes cluster, please also refer to Certificate Management with kubeadm. For manual rotation of CA certificates, please refer to this section. Pay special attention to point 7:
Update certificates for user accounts by replacing the content of client-certificate-data and client-key-data respectively.
For information about creating certificates for individual user accounts, see Configure certificates for user accounts.
Additionally, update the certificate-authority-data section in the kubeconfig files, respectively with Base64-encoded old and new certificate authority data.
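If the cluster was created with kubeadm, the sketch below shows how to check how long the current admin.conf credential stays valid and how to re-issue it. It assumes a recent kubeadm version and its default certificate locations; note that re-issuing does not invalidate previously distributed copies of admin.conf, only rekeying the CA (or letting the certificate expire) does that.
# Rough sketch for a kubeadm-managed cluster; run on a control-plane node.
# Show when each kubeadm-managed certificate (including admin.conf) expires.
kubeadm certs check-expiration
# Re-issue admin.conf with a fresh client certificate and key.
# This does NOT revoke previously distributed copies of admin.conf.
kubeadm certs renew admin.conf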

Related

Installing SSL Certificates for Wazuh-Dashboard

Is it possible to have the Wazuh Manager served through custom SSL certificates? The wazuh-certs-tool gives you a self-signed cert, and every other way to get it served through SSL has failed.
The closest I've gotten to getting this to work is I've had the dashboard being served by a custom SSL cert, I had agents connecting to it successfully and providing a heartbeat, but had zero log flows or events happening. When I had it in this state, I saw the API calls were coming from what appeared to be a Java instance, erroring out and complaining about the received certificate. I saw a keystore file located at /etc/wazuh-indexer. Do I also need to add the root-ca cert there as well?
It seems that your indexer's expected certificates do not match the certificates in your manager or the dashboard.
If you follow the normal installation guide, it shows how and where to place the certificates that are created using the wazuh-certs-tool. But certificates can be created from any other source; as long as they contain the expected information, they will work. You can check that information here.
I would recommend you follow the installation steps in the installation guide from scratch, to make sure you copy each expected certificate into its place and that the configuration files for your indexer, dashboard, and manager point to the correct files. All you would need to change is the creation of the certificates, so that you use your own custom certs.
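As a quick sanity check for a mismatch like this, you can verify that each component's certificate was actually issued by the root CA you deployed. A minimal sketch, assuming hypothetical file locations (adjust the paths to wherever your indexer, dashboard, and manager certificates actually live):
# Paths below are examples; a verification failure means the certificate
# does not chain up to the root-ca that the other components trust.
openssl verify -CAfile /etc/wazuh-indexer/certs/root-ca.pem \
    /etc/wazuh-indexer/certs/wazuh-indexer.pem
# Compare the issuer of the node certificate with the subject of the root-ca on disk.
openssl x509 -in /etc/wazuh-indexer/certs/wazuh-indexer.pem -noout -issuer
openssl x509 -in /etc/wazuh-indexer/certs/root-ca.pem -noout -subject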
In case of further doubt, do not hesitate to ask.

How to use Hashicorp Vault's AppRole in production?

We have installed and configured Hashicorp Vault AppRole authentication for one server, by storing the role_id and secret_id in a local file on the server, and we're able to have code on the server read the values from the file, authenticate to Vault, receive a token and then read the secrets it needs from Vault. So far so good. However, the secret_id expires after 31 days, and so the process fails.
I've read up on the concepts of using AppRoles, and they seem like the perfect fit for our use case, except for this expiration. We don't want to have to re-generate the secret_id every month.
From what I've read, if you create the role without setting secret_id_ttl it should be non-expiring, but that isn't the case. This may be due to how the AppRole auth method is configured, but I haven't seen anything solid on this.
So I found an article on the Hashicorp website where AppRoles are discussed in detail. The article gives good arguments for expiring secret_ids in a CI/CD environment, even illustrating how this works in 8 simple steps. I understand how this works, but the article fails to mention how the CI/CD and orchestrator systems themselves are authenticated to Vault. Or am I missing something?
In the end, I want to have the secret_id not expire. Ever.
Without additional support from your environment you will have to write some logic in your installer, and have a service manager of some sort to start your services. In many cloud environments, you may already have the equivalent entities (Terraform, Cloud Formation, etc.) and you should leverage their secrets management capabilities where needed.
For custom installations, here is a workflow that I have used.
1. Have an installation manager process that can be invoked to perform installation / upgrade. Make sure installation / upgrade of services always goes through this process.
2. Have a service manager process that is responsible for starting individual services and monitoring / restarting them. Make sure service start-ups always go via this service manager.
3. During installation, generate self-signed certificates for Vault, the installation manager and the service manager. The Vault certificates should trust the certs for the installation manager and the service manager. Store these with limited permissions (600) in directories owned by the installation user or the service manager user, as the case may be. Set up certificate-based authentication in Vault using these certs.
4. These credentials should have limited capabilities associated with them. The installation manager should only be able to create new roles and not delete anything. The service manager should only be able to create secrets for the named roles created by the installation manager, and delete nothing.
5. During installation / upgrade, the installation manager should connect to Vault and create all necessary service-specific roles. It should also set the role ids for individual services in per-service config files that the services read on start-up.
6. During each service's start-up, the service manager should connect to Vault and create secret ids corresponding to each service's role. It should set the secret id in an environment variable and start the service. The secret ids should have time-bound validity (by setting TTLs) so that they cannot be used for much beyond the creation of the auth token (see #7).
7. Each service should read the role id from the config file and the secret id from the environment variable. It should then generate the auth token using these two, and use that token to authenticate itself with Vault for its lifetime (see the sketch after this list).
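To illustrate step 7, here is a minimal sketch of the exchange a service would perform at start-up, using the Vault CLI. The file path, environment variable name and role name are assumptions for the example, not part of the workflow above.
# Minimal sketch; path, env var and role name are hypothetical.
export VAULT_ADDR="https://vault.example.com:8200"
ROLE_ID=$(cat /etc/myservice/role_id)        # written by the installation manager
SECRET_ID="$MYSERVICE_SECRET_ID"             # injected by the service manager
# Exchange role_id + secret_id for the client token the service uses from now on.
vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"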
It is possible to create a Vault AppRole with a secret_id that essentially never expires. However, this should be limited to use on a Vault development server -- one that does not contain any production credentials -- and for use in a development environment.
That being said, here's the procedure I used based on several articles in the Vault documentation, but primarily AppRole Pull Authentication.
This assumes that the Vault approle authentication method is already installed at approle/ and that you are logged in to Vault, have root or admin privileges on the Vault server and have a valid, non-expired token.
Note: For the values supplied for the fields below, the maximum value that Vault seems to accept is 999,999,999. For the TTL fields, that is a number of seconds, which comes out to more than 31 years. That's not forever, but it is long enough that renewing the secret_id will probably be somebody else's problem (SEP).
# Vault server address to be used by the Vault CLI.
export VAULT_ADDR="https://vault-dev.example.com:8200/"
# Vault namespace to be used by the CLI.
# Required for Cloud and Enterprise editions
# Not applicable for Open Source edition
export VAULT_NAMESPACE="admin"
# The name of the Vault AppRole
export VAULT_ROLE=my-approle
# Override defaults on the approle authentication method
# NOTE: In this command, the field names default-lease-ttl
# and max-lease-ttl contain dashes ('-'), NOT
# underscores ('_'), and are preceded by a single
# dash ('-').
vault auth tune \
    -default-lease-ttl=999999999 \
    -max-lease-ttl=999999999 approle/
# Override defaults on the approle
# NOTE: In this command, the field names secret_id_ttl and
# secret_id_num_uses contain underscores ('_'), NOT
# dashes ('-'), and are NOT preceded by a single
# dash ('-').
vault write auth/approle/role/my-approle \
    secret_id_ttl=999999999 \
    secret_id_num_uses=999999999
# Create a new secret_id for the approle which uses the new defaults
vault write -f auth/approle/role/my-approle/secret-id
Update the server config file to use the new secret_id and you are ready to go.
As the OP has noted, the Hashicorp Vault documentation assumes that the application is able to authenticate, somehow, to Vault, retrieve the secret ID (possibly wrapped), and then use that to authenticate again and fetch a token used to actually work with secrets. The answers here propose alternative approaches to retrieving that initial token.
Alan Thatcher wrote a blog article, Vault AppRole Authentication, that provides another well thought out approach:
Create a policy that allows the user to retrieve the secret-id and role-id, but nothing else.
Create a long lived, periodic/renewable token based on that policy.
Store the long-lived token securely, e.g. as a Kubernetes secret.
At runtime, use the long-lived token to:
acquire the secret-id and role-id,
authenticate to Vault using these and acquire a short-lived token,
use the current short-lived token to work with secrets.
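A minimal sketch of that runtime flow with the Vault CLI follows; the role name my-approle and the environment variable holding the long-lived token are assumptions for the example.
# The long-lived, periodic token is only permitted to read the role-id and
# generate secret-ids for the role (per the restricted policy above).
export VAULT_TOKEN="$LONG_LIVED_BROKER_TOKEN"
ROLE_ID=$(vault read -field=role_id auth/approle/role/my-approle/role-id)
SECRET_ID=$(vault write -f -field=secret_id auth/approle/role/my-approle/secret-id)
# Exchange them for the short-lived token that actually reads secrets.
vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"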
For Java applications, the Spring Vault project supports this approach if you configure the long-lived token as the "initial token" and the AppRole authentication name, e.g. chef-ro in the blog's case.
My personal feeling is that this approach is about as secure as, but a bit simpler than, the mutual TLS approach. I agree that using an infinite TTL for the secret-id is a less secure practice for production environments.
Thanks to Mr. Thatcher for thinking this one through.
This is probably not the canonical answer, but I found the question unanswered, so I decided to add some pointers.
As per Hashicorp Vault AppRole: role-id and secret-id:
Additional brownie information: Ideally, it's best practice to keep the TTL low, 30 minutes max if your application is stateful, or maybe even less if it's a stateless application. The secret key of the Vault approle should also be rotated every 90 days. Please note that by default, the Vault approle backend has a 31-day TTL, so if you want to set it to 90 days, you need to increase the TTL of the approle backend as well.
However (in the same question):
You can generate a secret-id with indefinite validity. But doing so will be as good as keeping your secrets in the configuration file.
For ephemeral instances you can use configuration management to pass in secrets via a third (broker) role. With regard to a server that exists indefinitely, I'm still working that out...
Ideas:
TLS certificates might work well on Windows; I don't know about Linux.
GitHub Personal Access Tokens, but this is not organization-friendly.
Review the other auth methods available to see if there's one that fits your requirements (e.g. AWS).

Servicefabric cluster certificate rollover and Add-AzServiceFabricClusterCertificate

I have a Service Fabric cluster deployed (it uses thumbprint, not common name) whose cluster certificate is close to expiring. I am a bit confused about the process for adding a new certificate and performing the rollover.
There is this article that sheds light on it
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-rollover-cert-cn
It mentions that using common names makes the process easier, but it doesn't explain how common-name-based rollover is easier.
I have also seen this command
Add-AzServiceFabricClusterCertificate - this can create the certificate in Key Vault and update the Service Fabric cluster too.
My questions are:
Is this a replacement for the process described in the article above?
Can this be used for certificate rollover?
Once the new certificate is added, is the rollover automatic?
https://learn.microsoft.com/en-us/powershell/module/az.servicefabric/add-azservicefabricclustercertificate?view=azps-2.0.0
An update: this command (Add-AzServiceFabricClusterCertificate) did the trick. It updated the Service Fabric cluster / VM scale set and added a secondary certificate. I was able to swap the secondary and primary certificates as a second operation.

How does kubectl get authorized?

I have been confused for a long time about how the kubectl user gets authorized. I bootstrapped a k8s cluster from scratch and use 'RBAC' as the authorization mode. The user kubectl uses is authenticated by certificate first, and should then be authorized by RBAC when accessing the api-server. I did nothing to grant permissions to the user; however, it is allowed to access all the APIs (creating pods or listing pods).
Kubernetes has no built-in user management system. It expects you to implement that part on your own. A common way to implement user auth is to create a certificate signing request and have it signed by the cluster certificate authority. By reading that newly generated certificate, the cluster will extract the username and the groups it belongs to. After that, it will apply the RBAC policies you implemented. So if the user can access everything, it is likely one of the following:
You are still using the admin user account instead of the newly created user account.
The user account you created belongs to an admin group
You did not enable RBAC correctly
This guide should help you with an easy example of user auth in Kubernetes: https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
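For completeness, here is a rough sketch of creating a dedicated user certificate signed by the cluster CA and then checking what that identity is actually allowed to do. The username, group, and kubeadm-default CA paths are assumptions for the example.
# Create a key and CSR for a hypothetical user "jane" in group "dev-team".
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev-team"
# Sign it with the cluster CA (kubeadm default paths assumed).
openssl x509 -req -in jane.csr -CA /etc/kubernetes/pki/ca.crt \
    -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out jane.crt -days 365
# Ask the API server (from your current kubeconfig context) what this identity
# may do; with RBAC enabled and no bindings for "jane" or "dev-team",
# this should answer "no".
kubectl --client-certificate=jane.crt --client-key=jane.key \
    --certificate-authority=/etc/kubernetes/pki/ca.crt auth can-i create pods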

How to revoke signed certificate in Kubernetes cluster?

kube-apiserver does not seem to provide an option to use a certificate revocation list (CRL).
Is there a way to revoke a client certificate if it's lost or not used anymore?
As far as I know there isn't a way to directly revoke certificates via a CRL. However, what does work, and what we are currently using, is ABAC policies to identify users (set via the Common Name of a certificate), and whether they have access to a given resource on Kubernetes.
As an example, say you have a user called "random". You would generate a Client Certificate for them from your given Certificate Authority, with a Common Name of "random".
From there, you can have an ABAC policy file (a flat file with one JSON policy object per line), with permissions set for user "random" that provide them with a certain level of access to the Kubernetes API. You can give them access to everything, to certain namespaces, or to other API parameters. If you need to revoke permissions, you simply delete that user from the ABAC policy file. We've tested this, and it works well. The unfortunate thing, I will say, is that you have to restart the Kubernetes API service for those changes to take effect, so there may be a few seconds of downtime while this change occurs. Obviously in a development environment this isn't a big deal, but in production you may need to schedule time for users to be added.
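For illustration, a policy entry for that user might look like the line below (the file path, namespace and scope are hypothetical; the file is passed to kube-apiserver via --authorization-policy-file together with --authorization-mode=ABAC):
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "random", "namespace": "dev", "resource": "*", "apiGroup": "*", "readonly": true}}
Deleting that line and restarting kube-apiserver is what actually revokes the user's access.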
Hopefully in the future a simple "kube-apiserver reload" will allow for a re-read of that ABAC policy file.
One final thing to note: when using Client Certificates for ABAC authentication, you will need to set permissions for users INDIVIDUALLY. Unlike with auth tokens under ABAC, you cannot put Client Certificate users into "groups." This caused us some headaches, so I figured it was worth passing on. :)
Hope this helps!