I have created a token using the init command. That token was not created with the default TTL, and now it has expired. Is it possible to revalidate an expired token in Kubernetes?
If I understand you correctly, then yes, it is possible.
Take a look at the official documentation.
The token is used for mutual authentication between the control-plane
node and the joining nodes. The token included here is secret. Keep it
safe, because anyone with this token can add authenticated nodes to
your cluster. These tokens can be listed, created, and deleted with
the kubeadm token command. See the kubeadm reference guide.
From there you can use the kubeadm token generate [flags] command.
This command will print out a randomly-generated bootstrap token that
can be used with the “init” and “join” commands.
Please let me know if that helped.
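Concretely, once the original token has expired you can simply create a replacement rather than revalidating the old one. A sketch, run on the control-plane node (the 48h TTL is just an illustrative value):

```shell
# List existing bootstrap tokens and their remaining TTL
kubeadm token list

# Create a fresh token; --print-join-command also prints the full
# "kubeadm join" line that new nodes can run as-is
kubeadm token create --print-join-command

# Or generate a token value first, then create it with a custom TTL
TOKEN=$(kubeadm token generate)
kubeadm token create "$TOKEN" --ttl 48h
```

`kubeadm token generate` only prints a well-formed token value; it is `kubeadm token create` that actually registers the token with the cluster.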
I started working with Kubernetes and noticed that there is one secret available in each namespace:
# kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-b2rzn   kubernetes.io/service-account-token   3      506d
This default-token-XXXX is the token for the service account, used when making kube-api calls.
I need to do something similar: we have a third-party API, and to access it we need a token that expires every 12 hours. I am thinking of creating a new secret, ourapi-token-XXXX, which will hold the token, plus a CronJob or daemon in Kubernetes which will check its expiry time and update the value.
Take the example of an AWS or GCP API token: it needs to be renewed automatically.
The goal is that when you try to access the third-party API, you don't need to generate the token manually; you just read a valid token from Kubernetes secrets.
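The CronJob idea described above is a common pattern: a small script that fetches a fresh token and rewrites the secret in place. A minimal sketch, where `get-api-token` is a hypothetical stand-in for your provider's real mechanism and `ourapi-token` is an illustrative secret name:

```shell
#!/bin/sh
# Hypothetical: fetch a fresh token from the third-party API.
# Replace 'get-api-token' with the real call for your provider
# (e.g. an AWS STS or GCP access-token command).
NEW_TOKEN=$(get-api-token)

# Update the secret idempotently: render the manifest with
# --dry-run=client and pipe it to kubectl apply, so the same
# command works whether or not the secret already exists.
kubectl create secret generic ourapi-token \
  --from-literal=token="$NEW_TOKEN" \
  --dry-run=client -o yaml | kubectl apply -f -
```

Scheduling this from a CronJob more often than the 12-hour expiry (say, every 6 hours) keeps the secret fresh; pods then read the token from the secret instead of generating it themselves.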
We generate the kubeconfig for a Kubernetes cluster from a web UI. Some users are complaining that their kubeconfig file is not working. We need to know the expiry date of the token in the kubeconfig file, so that we can advise users to regenerate the kubeconfig once we know how long it is valid.
You can verify the configured expiry time of the kubeconfig token in the Rancher UI, under API & Keys. Once the token expires, you will be prompted to log in again when executing kubectl commands against the cluster.
Please see the documentation for more information.
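If the kubeconfig token happens to be a JWT (many Kubernetes-issued tokens are; opaque API keys are not), you can also read its expiry directly from the token itself, without any UI. A sketch with a small helper function:

```shell
# Print the "exp" claim (expiry as Unix seconds) of a JWT.
# Assumes the token is a JWT; opaque tokens carry no readable expiry.
jwt_exp() {
  # The payload is the second dot-separated, base64url-encoded segment
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # Restore the base64 padding that JWT encoding strips
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d | grep -o '"exp":[0-9]*' | cut -d: -f2
}
```

For example, `jwt_exp "$(kubectl config view --raw -o jsonpath='{.users[0].user.token}')"` prints the expiry timestamp (user index 0 is an assumption; adjust for your kubeconfig), and `date -d @<seconds>` turns it into a human-readable date.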
Does anyone know if this change (https://github.com/kubernetes/kubernetes/pull/72179) will affect service account tokens and their ability not to expire? Right now we have such tokens in our CI/CD, and we rely on them not expiring.
According to this
This only changes the source of the credentials used by the controller loops started by the kube-controller-manager process. Existing use of tokens retrieved from secrets is not affected.
I'm trying to get login/pass authentication working on Vault.
When I try the method given in the API documentation here: https://www.vaultproject.io/api/auth/userpass/index.html#login
I get this error:
$ curl --request POST --data @payload.json https://<myurl>:8200/v1/auth/userpass/login/<mylogin> -k
{"errors":["missing client token"]}
And I can't find information on this error, which puzzles me: I want to authenticate with login/pass in order to get a token, so it is normal not to have one yet.
Here is the content of the payload.json:
{
"password": "foo"
}
Is there any way to log in with username/password? This is the only fallback method I have when a user does not know their token.
Thanks!
OK, so I figured it out by trial and error.
The userpass auth method was indeed disabled; I had to use LDAP auth instead. With the Vault UI that is installed, I managed to find the URL to authenticate. It was the following: https://******:8200/v1/auth/<ldap>/login/<user>
And that way it works.
Unfortunately, it does not help in the end. The idea was to synchronize Vault data locally, but the Vault API is really not built for that kind of access: it requires a LOT of requests and ends up being very slow even for a few synchronized secrets.
Make sure you are logging in under the correct namespace. You will get this error if your authentication method is enabled under something other than the default namespace that your CLI tool is using.
You can specify the namespace with the -ns=my/namespace/ parameter or the VAULT_NAMESPACE environment variable.
For example, if your namespace is "desserts/icecream"
vault login -ns=desserts/icecream/ -method=userpass username=ian
# OR
export VAULT_NAMESPACE=desserts/icecream/
vault login -method=userpass username=ian
In my case, I was not setting the Vault token in the right environment variable.
You have to set the value in VAULT_TOKEN so that it is used in subsequent requests; my environment variable was Vault_Token, and because of this it kept saying "missing client token".
By default, Vault checks this environment variable to find the token.
vault kv get --field "ACCESS_KEY_ID" secret/my-secret
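In other words, the variable name must be exactly VAULT_TOKEN, since environment variable names are case-sensitive. A sketch with a placeholder token value:

```shell
# Wrong: the Vault CLI never looks at this variable, so commands
# fail with "missing client token"
export Vault_Token="s.xxxxxxxxxxxx"   # placeholder value

# Right: the CLI reads VAULT_TOKEN for every subsequent request
export VAULT_TOKEN="s.xxxxxxxxxxxx"   # placeholder value
vault kv get --field "ACCESS_KEY_ID" secret/my-secret
```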
I have been confused for a long time about how the user of kubectl is authorized. I bootstrapped a k8s cluster from scratch and use 'RBAC' as the authorization mode. The user kubectl uses is authenticated by certificate first, then it should be authorized by RBAC when accessing the api-server. I did nothing about granting permissions to the user; however, it is allowed to access all the APIs (creating pods or listing pods).
Kubernetes has no built-in user management system; it expects you to implement that part on your own. A common way to implement user auth is to create a certificate signing request and have it signed by the cluster certificate authority. By reading that newly generated certificate, the cluster will extract the username and the groups it belongs to. After that, it will apply the RBAC policies you implemented. So if the user can access everything, it can be one of the following:
You are still using the admin user account instead of the newly created user account.
The user account you created belongs to an admin group.
You did not enable RBAC correctly.
This guide should help you with an easy example of user auth in Kubernetes: https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
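A quick way to tell which of the three cases applies is to check permissions with kubectl impersonation and inspect the certificate's subject. A sketch, with "jane" and "jane.crt" as illustrative names:

```shell
# Check what the user is actually allowed to do; "yes" on everything
# for a supposedly restricted user suggests RBAC is not in effect
kubectl auth can-i create pods --as=jane
kubectl auth can-i '*' '*' --as=jane

# Inspect the client certificate: CN is the username, O the groups.
# A certificate with O=system:masters is treated as cluster admin
# regardless of any RBAC policy.
openssl x509 -in jane.crt -noout -subject

# Confirm the API server was actually started with RBAC enabled
ps aux | grep kube-apiserver | grep -o 'authorization-mode=[^ ]*'
```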