My team recently found out that the default service account, which is managed by K8S and associated with pods by default, had full read and write permissions in the cluster. We could list secrets from the running pods, create new pods...
We found this strange, as we thought the default service account had no permissions whatsoever, or at most read-only permissions. So we decided to search through the cluster for role bindings or cluster role bindings associated with that service account, but we could find none.
In a K8S cluster, doesn't the default service account have a basic role binding associated with it? Why don't we have any? And if we don't have any, why does the service account have full permissions on the cluster, instead of none at all? Lastly, how can we modify it so it has no permissions in the cluster?
Just to make it clear: we have multiple namespaces in our cluster, each one having its own default service account. However, none of them have any role bindings associated with them and they all have full cluster permissions.
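(For anyone who wants to reproduce the check: assuming your own user is allowed to impersonate service accounts, you can ask the API server directly what a given default service account may do, e.g.
kubectl auth can-i --list --as=system:serviceaccount:default:default
kubectl auth can-i create pods --as=system:serviceaccount:default:default
substituting the namespace as needed.)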
Apparently, by default, kops sets up clusters with the K8S API server authorization mode set to AlwaysAllow, meaning any request, as long as it is successfully authenticated, has global admin permissions.
In order to fix this, we had to change the authorization mode to RBAC and manually tweak the permissions.
Thank you to @ArthurBusser for pointing it out!
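For anyone else running kops: the switch lives in the cluster spec. Roughly (assuming a reasonably recent kops version), run kops edit cluster and set
spec:
  authorization:
    rbac: {}
then roll it out with something like
kops update cluster --yes
kops rolling-update cluster --yes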
You just have to look through your RoleBindings/ClusterRoleBindings. Probably there is one referencing the default SA somewhere.
Unfortunately there is no built-in way to search for the ClusterRoles of a user, but you can use the script below:
# Usage: getRoles <SubjectKind> <SubjectName> [namespace]
# Prints the ClusterRole referenced by every ClusterRoleBinding whose subjects
# include the given subject.
function getRoles() {
  local kind="${1}"
  local name="${2}"
  local namespace="${3:-}"

  kubectl get clusterrolebinding -o json | jq -r "
    .items[]
    |
    select(
      .subjects[]?
      |
      select(
        .kind == \"${kind}\"
        and
        .name == \"${name}\"
        and
        (if .namespace then .namespace else \"\" end) == \"${namespace}\"
      )
    )
    |
    (.roleRef.kind + \"/\" + .roleRef.name)
  "
}
$ getRoles Group system:authenticated
ClusterRole/system:basic-user
ClusterRole/system:discovery
$ getRoles ServiceAccount attachdetach-controller kube-system
ClusterRole/system:controller:attachdetach-controller
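Note that the function above only inspects ClusterRoleBindings. If you also want to check namespaced RoleBindings (which is where a binding for a default service account would usually live), the same idea works, for example:
kubectl get rolebinding --all-namespaces -o json | jq -r '
  .items[]
  | select(.subjects[]? | select(.kind == "ServiceAccount" and .name == "default"))
  | (.metadata.namespace + ": " + .roleRef.kind + "/" + .roleRef.name)
'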
Related
I'm trying to set up RBAC in Kubernetes.
In my cluster, there are some default Roles like admin, cluster-admin and edit.
Those Roles differentiate between (e.g.) a deployment and deployment/status.
When I look at the k8s API reference (https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/deployment-v1/#get-read-the-specified-deployment) the description for these is identical, except for the endpoint in the request.
They have the same parameters (name, namespace, pretty) and have the same return value type (Deployment).
Why would I want to grant someone the right to get the deployment but not the deployment/status or vice versa?
The same probably goes for all the other /status endpoints...
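To make the question concrete, this is roughly where the distinction shows up in a Role (a sketch; the Role name and namespace are made up):
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-reader      # hypothetical name
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]          # the main resource
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments/status"]   # the /status subresource has to be listed separately
  verbs: ["get"]
EOF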
For example, I don't want this user to:
Edit Cluster
Edit Deployment
Edit ig
Delete Pods
...
But Allow this user to:
Get nodes
Get pods
Describe Pods
If I need to use RBAC for this, can I get some guidance?
You will need to use RBAC for that. After creating a user, create a Role or ClusterRole (depending on whether you want it to apply to a specific namespace or cluster-wide), and then create a RoleBinding or ClusterRoleBinding to bind the user to the role you created.
You can find it all here: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
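As a rough sketch of what that could look like for the permissions listed above (all names are made up; "describe pods" is just a series of get/list calls, and the events rule lets the Events section of kubectl describe show up):
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-only-pods-nodes        # hypothetical name
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "events"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-only-pods-nodes        # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: read-only-pods-nodes
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: some-user                   # replace with the actual username
EOF
Everything not covered by these read verbs (editing deployments, deleting pods, etc.) stays forbidden as long as no other authorizer grants it, since RBAC only ever adds permissions.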
I want to implement RBAC for each user. Already have OIDC running and I can see my user credentials being saved in kube config.
But to check my rolebindings, I have to run the command as
kubectl get pods --as=user@email.com, even though I am logged in as user@email.com (through gcloud init).
I am an owner account in our cloud but I was assuming the RBAC limitations should still work.
Apart from the credentials, you should configure a kubectl context to associate these credentials with the cluster, and then set that context as the default:
First, list the clusters in your kubeconfig with kubectl config get-clusters
Then create a new context:
kubectl config set-context my-new-context --cluster <CLUSTER NAME> --user="user@email.com"
And finally configure the new context as default:
kubectl config use-context my-new-context
I am an owner account in our cloud but I was assuming the RBAC limitations should still work.
RBAC is additive only. If you have permissions via another configured authorizer, you will still have those permissions even if you have lesser permissions via RBAC.
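A quick way to see what an identity actually ends up with is impersonation with kubectl auth can-i; note that this evaluates every configured authorizer, not just RBAC, which is why an owner account on a managed cloud cluster may still get "yes" for everything:
kubectl auth can-i --list --as=user@email.com
kubectl auth can-i delete pods --as=user@email.com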
I've got a username and password, how do I authenticate kubectl with them?
Which command do I run?
I've read through: https://kubernetes.io/docs/reference/access-authn-authz/authorization/ and https://kubernetes.io/docs/reference/access-authn-authz/authentication/ though can not find any relevant information in there for this case.
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/
The above does not seem to work:
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods in the namespace "default": Unknown user "client"
Kubernetes provides a number of different authentication mechanisms. Providing a username and password directly to the cluster (as opposed to using an OIDC provider) would indicate that you're using Basic authentication, which hasn't been the default option for a number of releases.
The syntax you've listed appears right, assuming that the cluster supports basic authentication.
The error you're seeing is similar to the one here which may suggest that the cluster you're using doesn't currently support the authentication method you're using.
Additional information about what Kubernetes distribution and version you're using would make it easier to provide a better answer, as there is a lot of variety in how k8s handles authentication.
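One thing worth double-checking, since the error mentions User "client": the credentials you add with set-credentials are only used if the current context actually points at that user entry. A minimal sketch, assuming the cluster really does accept basic auth:
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
# point the context you are currently using at the new user entry
kubectl config set-context --current --user=cluster-admin
kubectl get pods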
You should have a group set for the authenticating user.
Example:
password1,user1,userid1,system:masters
password2,user2,userid2
Reference:
"Use a credential with the system:masters group, which is bound to the cluster-admin super-user role by the default bindings."
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
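For completeness, that is the static password file format that older API servers accepted; the file would be passed to the API server roughly like this (the flag has been deprecated and removed in recent releases, and the path is only an example):
kube-apiserver --basic-auth-file=/etc/kubernetes/basic-auth.csv   # plus your usual API server flags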
Kubernetes 1.6+ sees a lot of changes revolving around RBAC and ABAC. However, what is a little quirky is that the dashboard etc. is no longer accessible by default, as it previously was.
Access will result in
User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched."
There is plenty of documentation in the k8s docs, but it does not really state how, as the creator of a cluster, you can practically gain access as cluster-admin.
What is a practical way to authenticate me as cluster-admin?
By far the easiest method is to use the credentials from /etc/kubernetes/admin.conf (this is on your master if you used kubeadm). Run kubectl proxy --kubeconfig=admin.conf on your client and then you can visit http://127.0.0.1:8001/ui from your browser.
You might need to change the master address in admin.conf after you copy it to your client machine.
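If you would rather keep using your own identity instead of copying admin.conf around, you can also bind that user to the built-in cluster-admin ClusterRole (the binding name here is made up):
kubectl --kubeconfig=admin.conf create clusterrolebinding me-cluster-admin \
  --clusterrole=cluster-admin \
  --user=<your-authenticated-username>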