After watching a few videos on RBAC (role-based access control) on Kubernetes (of which this one was the most transparent for me), I've followed the steps, however on k3s, not k8s as all the sources imply. From what I could gather (it's not working), the problem isn't with the actual role-binding process, but rather with the x509 user cert, which isn't acknowledged by the API service:
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
This also isn't documented on Rancher's wiki on security for K3s (while it is documented for their k8s implementation), though it is described for Rancher 2.x itself, so I'm not sure whether it's a problem with my implementation or a k3s <-> k8s thing.
$ kubectl version --short
Client Version: v1.20.5+k3s1
Server Version: v1.20.5+k3s1
To reproduce the process, my steps are as follows:
Get k3s ca certs
These were described to be under /etc/kubernetes/pki (k8s); however, based on this, on k3s they seem to be at /var/lib/rancher/k3s/server/tls/ (server-ca.crt & server-ca.key).
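For reference, a quick listing of that directory (a sketch; this assumes a default k3s install, and exact file names may differ between k3s versions) shows both server and client CA material:
$ sudo ls /var/lib/rancher/k3s/server/tls/
# expect to see, among others: server-ca.crt, server-ca.key,
# client-ca.crt, client-ca.key, plus the serving certs and keys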
Gen user certs from ca certs
# generate user key
$ openssl genrsa -out user.key 2048
# generate signing request from the user key
$ openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"
# generate user.crt from the CSR, signed by the CA
$ openssl x509 -req -in user.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out user.crt -days 365
... all good:
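(As a quick sanity check, not part of the original steps, the generated cert can be inspected to confirm the CN/O and the issuer; a sketch:)
$ openssl x509 -in user.crt -noout -subject -issuer -dates
# subject should show CN=user, O=rbac; issuer should be the k3s CA used above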
Creating kubeConfig file for user, based on the certs:
# Take user.crt and base64 encode to get encoded crt
cat user.crt | base64 -w0
# Take user.key and base64 encode to get encoded key
cat user.key | base64 -w0
Created config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <server-ca.crt base64-encoded>
    server: https://<k3s masterIP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: user
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate-data: <user.crt base64-encoded>
    client-key-data: <user.key base64-encoded>
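As an aside, the same kubeconfig can be built without hand-editing base64 blobs by letting kubectl embed the certs (a sketch, assuming the file names used above and the same cluster/context names):
$ kubectl config --kubeconfig=userkubeconfig set-cluster home-pi4 \
    --server=https://<k3s masterIP>:6443 \
    --certificate-authority=server-ca.crt --embed-certs=true
$ kubectl config --kubeconfig=userkubeconfig set-credentials user \
    --client-certificate=user.crt --client-key=user.key --embed-certs=true
$ kubectl config --kubeconfig=userkubeconfig set-context user-homepi4 \
    --cluster=home-pi4 --user=user --namespace=rbac
$ kubectl config --kubeconfig=userkubeconfig use-context user-homepi4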
Set up Role & RoleBinding (within the specified namespace 'rbac')
role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
roleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user
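As a side note, the Role/RoleBinding pair itself can be sanity-checked from the admin kubeconfig using impersonation, independently of the certificate (a sketch, assuming the names above):
$ kubectl auth can-i get pods --namespace rbac --as user
$ kubectl auth can-i list pods --namespace rbac --as user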
After all of this, I get fun times of...
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
Any suggestions please?
Apparently this Stack Overflow question presented a solution to the problem, but following the GitHub thread, it came more-or-less down to the same approach followed here (unless I'm missing something)?
As we can find in the Kubernetes Certificate Signing Requests documentation:
A few steps are required in order to get a normal user to be able to authenticate and invoke an API.
I will create an example to illustrate how you can get a normal user who is able to authenticate and invoke an API (I will use the user john as an example).
First, create PKI private key and CSR:
# openssl genrsa -out john.key 2048
NOTE: CN is the name of the user and O is the group that this user will belong to
# openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
# ls
john.csr john.key
Then create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kubectl.
# cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: john
spec:
  groups:
  - system:authenticated
  request: $(cat john.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
certificatesigningrequest.certificates.k8s.io/john created
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
john 39s kubernetes.io/kube-apiserver-client system:admin Pending
# kubectl certificate approve john
certificatesigningrequest.certificates.k8s.io/john approved
# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
john 52s kubernetes.io/kube-apiserver-client system:admin Approved,Issued
Export the issued certificate from the CertificateSigningRequest:
# kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
# ls
john.crt john.csr john.key
With the certificate created, we can define the Role and RoleBinding for this user to access Kubernetes cluster resources. I will use the Role and RoleBinding similar to yours.
# cat role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: john-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
# kubectl apply -f role.yml
role.rbac.authorization.k8s.io/john-role created
# cat rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: john-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: john
# kubectl apply -f rolebinding.yml
rolebinding.rbac.authorization.k8s.io/john-binding created
The last step is to add this user into the kubeconfig file (see: Add to kubeconfig)
# kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true
User "john" set.
# kubectl config set-context john --cluster=default --user=john
Context "john" created.
Finally, we can change the context to john and check if it works as expected.
# kubectl config use-context john
Switched to context "john".
# kubectl config current-context
john
# kubectl get pods
NAME READY STATUS RESTARTS AGE
web 1/1 Running 0 30m
# kubectl run web-2 --image=nginx
Error from server (Forbidden): pods is forbidden: User "john" cannot create resource "pods" in API group "" in the namespace "default"
As you can see, it works as expected (user john only has get and list permissions).
Thank you matt_j for the example/answer provided to my question. I marked that as the answer, as it was a direct answer to my question regarding RBAC via certificates. In addition to that, I'd also like to provide an example of RBAC via service accounts, as a variation (for those who prefer it for their specific use case).
Service account creation
//kubectl create serviceaccount name -n namespace
$ kubectl create serviceaccount udef -n rbac
This creates the service account and, automatically, a corresponding secret (udef-token-lhvm8), which you can see with YAML output:
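(Exact output varies by cluster version, but on this k3s version something like the following should show the account and its auto-created secret reference:)
// kubectl get serviceaccount <name> -n <namespace> -o yaml
$ kubectl get serviceaccount udef -n rbac -o yaml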
Get token from created secret:
// kubectl describe secret <secretName> -n <namespace>
$ kubectl describe secret udef-token-lhvm8 -n rbac
The secret will contain three objects: (1) ca.crt, (2) namespace, (3) token
# ... other secret context
Data
====
ca.crt: x bytes
namespace: x bytes
token: xxxx token xxxx
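(Alternatively, the token can be pulled out in one line with jsonpath instead of reading it off the describe output; a sketch using the secret name above:)
$ kubectl get secret udef-token-lhvm8 -n rbac -o jsonpath='{.data.token}' | base64 -d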
Put token into config file
You can start by taking your 'admin' config file and writing it out to a new file:
// location of **k3s** kubeconfig
$ sudo cat /etc/rancher/k3s/k3s.yaml > /home/{userHomeFolder}/userKubeConfig
Under users section, can replace certificate data with token:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx root ca cert content xxx
    server: https://<host IP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: nametype
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: nametype
  user:
    token: xxxx token xxxx
The Role and RoleBinding manifests can be created as required, as previously specified (NB: within the same namespace), in this case linking to the service account:
# role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
---
# rolebinding manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- kind: ServiceAccount
  name: udef
  namespace: rbac
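(Before testing remotely, the binding can also be checked from the admin context by impersonating the service account; a sketch, using the names from the manifests above:)
$ kubectl auth can-i get pods -n rbac --as system:serviceaccount:rbac:udef
$ kubectl auth can-i get namespaces --as system:serviceaccount:rbac:udef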
With this being done, you will be able to test remotely:
// show pods -> will be allowed
$ kubectl get pods --kubeconfig
..... valid response provided
// get namespaces (or other types of commands) -> should not be allowed
$ kubectl get namespaces --kubeconfig
Error from server (Forbidden): namespaces is forbidden: User bla-bla
Related
I am using Vault CSI Driver on Charmed Kubernetes v1.19 where I'm trying to retrieve secrets from Vault for a pod running in a separate namespace (webapp) with its own service account (webapp-sa) following the steps in the blog.
As far as I have been able to understand so far, the Pod is trying to authenticate to the Kubernetes API so that it can later generate a Vault token to access the secret from Vault.
$ kubectl get po webapp
NAME READY STATUS RESTARTS AGE
webapp 0/1 ContainerCreating 0 22m
It appears to me there's some issue authenticating with the Kubernetes API.
The pod remains stuck in the ContainerCreating state with the message: failed to create a service account token for requesting pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35m default-scheduler Successfully assigned webapp/webapp to host-03
Warning FailedMount 4m38s (x23 over 35m) kubelet MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod webapp/webapp, err: rpc error: code = Unknown desc = error making mount request: failed to create a service account token for requesting pod {webapp xxxx webapp webapp-sa}: the server could not find the requested resource
I can get the Vault token using the CLI in the pod namespace:
$ vault write auth/kubernetes/login role=database jwt=$SA_JWT_TOKEN
Key Value
--- -----
token <snipped>
I do get the vault token using the API as well:
$ curl --request POST --data @payload.json https://127.0.0.1:8200/v1/auth/kubernetes/login
{
  "request_id":"1234",
  <snipped>
  "auth":{
    "client_token":"XyZ",
    "accessor":"abc",
    "policies":[
      "default",
      "webapp-policy"
    ],
    "token_policies":[
      "default",
      "webapp-policy"
    ],
    "metadata":{
      "role":"database",
      "service_account_name":"webapp-sa",
      "service_account_namespace":"webapp",
      "service_account_secret_name":"webapp-sa-token-abcd",
      "service_account_uid":"123456"
    },
    <snipped>
  }
}
Reference: https://www.vaultproject.io/docs/auth/kubernetes
As per the vault documentation, I've configured Vault with the Token Reviewer SA as follows:
$ cat vault-auth-service-account.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-token-review-binding
  namespace: vault
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-auth
  namespace: vault
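(As a sanity check, whether the token-reviewer service account actually received the auth-delegator permissions can be verified with impersonation; a sketch based on the manifest above:)
$ kubectl auth can-i create tokenreviews --as=system:serviceaccount:vault:vault-auth
$ kubectl auth can-i create subjectaccessreviews --as=system:serviceaccount:vault:vault-auth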
Vault is configured with JWT from the Token Reviewer SA as follows:
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="<TOKEN Reviewer service account JWT>" \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    kubernetes_ca_cert=@ca.crt
I have defined a Vault Role to allow the webapp-sa access to the secret:
$ vault write auth/kubernetes/role/database \
bound_service_account_names=webapp-sa \
bound_service_account_namespaces=webapp \
policies=webapp-policy \
ttl=72h
Success! Data written to: auth/kubernetes/role/database
The webapp-sa is allowed access to the secret as per the Vault Policy defined as follows:
$ vault policy write webapp-policy - <<EOF
> path "secret/data/db-pass" {
> capabilities = ["read"]
> }
> EOF
Success! Uploaded policy: webapp-policy
The Pod and its SA are defined as follows:
$ cat webapp-sa-and-pod.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: webapp-sa
---
kind: Pod
apiVersion: v1
metadata:
  name: webapp
spec:
  serviceAccountName: webapp-sa
  containers:
  - image: registry/jweissig/app:0.0.1
    name: webapp
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        providerName: vault
        secretProviderClass: "vault-database"
Does anyone have any clue as to why the Pod won't authenticate with the Kubernetes API?
Do I have to enable flags on the kube-apiserver for the Token Review API to work?
Is it enabled by default on Charmed Kubernetes v1.19?
Would be grateful for any help.
Regards,
Sana
Normally you'd do ibmcloud login ⇒ ibmcloud ks cluster-config mycluster ⇒ copy and paste the export KUBECONFIG= and then you can run your kubectl commands.
But if this were being done for some automated DevOps pipeline outside of IBM Cloud, what is the method for authenticating and getting access to the cluster?
You should not copy your kubeconfig to the pipeline. Instead you can create a service account with permissions to a particular namespace and then use its credentials to access the cluster.
What I do is create a service account and role binding like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-tez-dev # account name
  namespace: tez-dev # namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tez-dev-full-access # role
  namespace: tez-dev
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods", "services"] # resources to which permissions are granted
  verbs: ["*"] # what actions are allowed
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tez-dev-view
  namespace: tez-dev
subjects:
- kind: ServiceAccount
  name: gitlab-tez-dev
  namespace: tez-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tez-dev-full-access
Then you can get the token for the service account using:
kubectl describe secrets -n <namespace> gitlab-tez-dev-token-<value>
The output:
Name: gitlab-tez-dev-token-lmlwj
Namespace: tez-dev
Labels: <none>
Annotations: kubernetes.io/service-account.name: gitlab-tez-dev
kubernetes.io/service-account.uid: 5f0dae02-7b9c-11e9-a222-0a92bd3a916a
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1042 bytes
namespace: 7 bytes
token: <TOKEN>
In the above command, namespace is the namespace in which you created the account, and the value is the unique suffix which you will see when you do
kubectl get secret -n <namespace>
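If you prefer to grab the token non-interactively (handy when scripting the pipeline setup), something like this should work; a sketch assuming the account name above and a cluster version that still auto-creates service account token secrets:
SECRET_NAME=$(kubectl -n tez-dev get serviceaccount gitlab-tez-dev -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n tez-dev get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 -d)
echo $TOKEN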
Copy the token to your pipeline environment variables or configuration and then you can access it in the pipeline. For example, in gitlab I do (only the part that is relevant here):
k8s-deploy-stage:
  stage: deploy
  image: lwolf/kubectl_deployer:latest
  services:
    - docker:dind
  only:
    refs:
      - dev
  script:
    ######## CREATE THE KUBECFG ##########
    - kubectl config set-cluster ${K8S_CLUSTER_NAME} --server=${K8S_URL}
    - kubectl config set-credentials gitlab-tez-dev --token=${TOKEN}
    - kubectl config set-context tez-dev-context --cluster=${K8S_CLUSTER_NAME} --user=gitlab-tez-dev --namespace=tez-dev
    - kubectl config use-context tez-dev-context
    ####### NOW COMMANDS WILL BE EXECUTED AS THE SERVICE ACCOUNT #########
    - kubectl apply -f deployment.yml
    - kubectl apply -f service.yml
    - kubectl rollout status -f deployment.yml
The KUBECONFIG environment variable is a list of paths to Kubernetes configuration files that define one or more (switchable) contexts for kubectl (https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
Copy your Kubernetes configuration file to your pipeline agent (~/.kube/config by default) and optionally set the KUBECONFIG environment variable. If you got different contexts in your config file, you may want to remove the ones you don't need in your pipeline before copying it or switch contexts using kubectl config use-context.
Everything you need to connect to your kube api server is inside that config, certs, tokens etc.
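For example, a minimal sketch of what that might look like on the pipeline agent (file path and context name are placeholders):
# copy the config you exported via ibmcloud ks cluster-config onto the agent, then:
export KUBECONFIG=/path/to/kube-config-mycluster.yml
kubectl config get-contexts           # list available contexts
kubectl config use-context mycluster  # pick the one the pipeline should use
kubectl get pods                      # subsequent kubectl calls use that context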
If you don't want to copy a token into a file or want to use the API to automate the retrieval of the token, you can also execute some POST commands in order to programmatically retrieve your user token.
The full docs for this are here: https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install#kube_api
The key piece is retrieving your id token with the POST https://iam.bluemix.net/identity/token call.
The body will return an id_token that you can use in your Kubernetes API calls.
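As a rough sketch of what that token retrieval can look like (shown here against the newer iam.cloud.ibm.com host with the API-key grant type; treat the exact host and parameters as assumptions and check the linked docs for the call that matches your account):
curl -X POST "https://iam.cloud.ibm.com/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -H "Accept: application/json" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey" \
  -d "apikey=$IBMCLOUD_API_KEY"
# the JSON response contains the token (access_token / id_token depending on grant type)
# to use as a Bearer token in Kubernetes API calls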
I'm trying to set up some basic authorization and authentication for various users to access a shared K8s cluster.
Requirement: Multiple users can have access to multiple namespaces, with a separate set of certs and keys for each of them.
Proposal:
openssl genrsa -out $PRIV_KEY 2048
# Generate CSR
openssl req -new -key $PRIV_KEY -out $CSR -subj "/CN=$USER"
# Create k8s CSR
K8S_CSR=user-request-$USER-$NAMESPACE_NAME-admin
cat <<EOF >./$K8S_CSR.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: $K8S_CSR
spec:
  groups:
  - system:authenticated
  request: $(cat $CSR | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
kubectl create -n $NAMESPACE_NAME -f $K8S_CSR.yaml
# Approve K8s CSR
kubectl certificate approve $K8S_CSR
# Fetch User Certificate
kubectl get csr $K8S_CSR -o jsonpath='{.status.certificate}' | base64 -d > $USER-$NAMESPACE_NAME-admin.crt
# Create Admin Role Binding
kubectl create rolebinding $NAMESPACE_NAME-$USER-admin-binding --clusterrole=admin --user=$USER --namespace=$NAMESPACE_NAME
Problem:
The user cert and/or private key are not specific to this namespace.
If I just create another RoleBinding for the same user in a different namespace, they will be able to access it. How do I prevent that from happening?
The CA in Kubernetes is a cluster-wide CA (not namespaced) so you won't be able to create certs tied to a specific namespace.
system:authenticated and system:unauthenticated are built-in groups in Kubernetes that identify whether a request is authenticated. You can't directly manage groups or users in Kubernetes; you will have to configure an alternative cluster authentication method to take advantage of users and groups, for example a static token file or OpenID Connect.
Then you can restrict users or groups defined in your identity provider to a Role that doesn't allow them to create another Role or RoleBinding. That way they are not able to give themselves access to other namespaces, and only the cluster-admin decides which RoleBindings (and namespaces) a specific user is part of.
For example, in your Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: mynamespace
  name: myrole
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["deployments"]   # <== never include role, clusterrole, rolebinding, and clusterrolebinding
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Another alternative is to restrict access based on a Service Account token, via a RoleBinding to the service account.
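A minimal sketch of that service-account variant (the names here are made up for illustration): create an account per namespace, bind it to a namespaced role, and hand out only that account's token:
# hypothetical names: 'dev-user' service account in namespace 'team-a'
kubectl create serviceaccount dev-user -n team-a
kubectl create rolebinding dev-user-admin \
  --clusterrole=admin \
  --serviceaccount=team-a:dev-user \
  -n team-a
# the token for team-a:dev-user only grants what this binding allows, inside team-a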
There is a default ClusterRoleBinding named cluster-admin.
When I run kubectl get clusterrolebindings cluster-admin -o yaml I get:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: 2018-06-13T12:19:26Z
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "98"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 0361e9f2-6f04-11e8-b5dd-000c2904e34b
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
In the subjects field I have:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
How can I see the members of the group system:masters?
I read here about groups, but I don't understand how I can see who is inside a group, as in the example above with system:masters.
I noticed that when I decoded /etc/kubernetes/pki/apiserver-kubelet-client.crt using the command openssl x509 -in apiserver-kubelet-client.crt -text -noout, it contained the subject system:masters, but I still didn't understand who the users in this group are:
Issuer: CN=kubernetes
Validity
    Not Before: Jul 31 19:08:36 2018 GMT
    Not After : Jul 31 19:08:37 2019 GMT
Subject: O=system:masters, CN=kube-apiserver-kubelet-client
Subject Public Key Info:
    Public Key Algorithm: rsaEncryption
        Public-Key: (2048 bit)
        Modulus:
Answer updated:
It seems that there is no way to do it using kubectl. There is no object like Group that you can "get" inside the Kubernetes configuration.
Group information in Kubernetes is currently provided by the Authenticator modules, and usually it's just a string in the user property.
Perhaps you can get the list of groups from the subject of the user certificate, or, if you use GKE, EKS or AKS, the group attribute is stored in a cloud user management system.
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://kubernetes.io/docs/reference/access-authn-authz/authentication/
Information about membership in system groups can be requested from ClusterRoleBinding objects (for example, for "system:masters" it shows only the cluster-admin ClusterRole):
Using jq:
kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters")'
If you want to list the names only:
kubectl get clusterrolebindings -o json | jq -r '.items[] | select(.subjects[0].kind=="Group") | select(.subjects[0].name=="system:masters") | .metadata.name'
Using go-templates:
kubectl get clusterrolebindings -o go-template='{{range .items}}{{range .subjects}}{{.kind}}-{{.name}} {{end}} {{" - "}} {{.metadata.name}} {{"\n"}}{{end}}' | grep "^Group-system:masters"
Some additional information about system groups can be found in GitHub issue #44418 or in the RBAC document.
Admittedly, late to the party here.
Have a read through the Kubernetes 'Authenticating' docs. Kubernetes does not have an in-built mechanism for defining and controlling users (as distinct from ServiceAccounts which are used to provide a cluster identity for Pods, and therefore services running on them).
This means that Kubernetes does not therefore have any internal DB to reference, to determine and display group membership.
In smaller clusters, x509 certificates are typically used to authenticate users. The API server is configured to trust a CA for the purpose, and then users are issued certificates signed by that CA. As you had noticed, if the subject contains an 'Organisation' field, that is mapped to a Kubernetes group. If you want a user to be a member of more than one group, then you specify multiple 'O' fields. (As an aside, to my mind it would have made more sense to use the 'OU' field, but that is not the case)
In answer to your question, it appears that in the case of a cluster where users are authenticated by certificates, your only route is to have access to the issued certs, and to check for the presence of the 'O' field in the subject. I guess in more advanced cases, Kubernetes would be integrated with a centralised tool such as AD, which could be queried natively for group membership.
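For instance, checking the 'O' (group) fields on an issued user cert looks something like this (a sketch; user.crt stands for whatever client certificate you have on hand):
openssl x509 -in user.crt -noout -subject -nameopt multiline
# each 'organizationName' line corresponds to one Kubernetes group for this user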
In EKS the system:masters group is mapped to IAM roles in the aws-auth ConfigMap
kubectl get cm -n kube-system aws-auth -oyaml | yq '.data.mapRoles' | yq -P
I want to limit the permissions of the following service account, which I created as follows:
kubectl create serviceaccount alice --namespace default
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -d > ca.crt
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -d)
c=`kubectl config current-context`
name=`kubectl config get-contexts $c | awk '{print $3}' | tail -n 1`
endpoint=`kubectl config view -o jsonpath="{.clusters[?(@.name == \"$name\")].cluster.server}"`
kubectl config set-cluster cluster-staging \
--embed-certs=true \
--server=$endpoint \
--certificate-authority=./ca.crt
kubectl config set-credentials alice-staging --token=$user_token
kubectl config set-context alice-staging \
--cluster=cluster-staging \
--user=alice-staging \
--namespace=default
kubectl config get-contexts
#kubectl config use-context alice-staging
This has permission to see everything with:
kubectl --context=alice-staging get pods --all-namespaces
I try to limit it with the following, but it still has all the permissions:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: no-access
rules:
- apiGroups: [""]
  resources: [""]
  verbs: [""]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: no-access-role
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  kind: ClusterRole
  name: no-access
  apiGroup: rbac.authorization.k8s.io
The idea is to limit access to a namespace in order to distribute tokens to users, but I can't get it to work... I think it may be due to inherited permissions, but I cannot disable them for a single service account.
Using: GKE, container-vm
THX!
Note that service accounts are not meant for users, but for processes running inside pods (https://kubernetes.io/docs/admin/service-accounts-admin/).
In Create user in Kubernetes for kubectl you can find how to create a user account for your K8s cluster.
Moreover, I advise you to check whether RBAC is actually enabled in your cluster, which could explain why a user can do more operations than expected.
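For example, one quick check is whether the RBAC API group is served at all (a sketch; note that on GKE you should also confirm that "legacy authorization" (ABAC) is disabled for the cluster, since it can grant broader access regardless of your RBAC rules):
kubectl api-versions | grep rbac.authorization.k8s.io
# no output suggests the RBAC authorizer is not enabled on this cluster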