I am having issues with service accounts. I created a service account and then created .key and .crt using this guide:
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
I used cluster_ca.key and cluster_ca.crt from the KOPS_STATE_STORE bucket (since I used kops to create the cluster) to create the user's ca.crt and ca.key. Then I got the token from the secret.
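For context, a hedged reconstruction of that signing step (the CSR file name and the CN are illustrative; only the cluster_ca.* and ca.* names come from the question, and the guide's exact commands may differ):
# Generate a client key and CSR, then sign it with the cluster CA pulled from the kops state store
openssl genrsa -out ca.key 2048
openssl req -new -key ca.key -out user.csr -subj "/CN=gitlab-telematics-dev"
openssl x509 -req -in user.csr -CA cluster_ca.crt -CAkey cluster_ca.key -CAcreateserial -out ca.crt -days 365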
I set the context like this:
kubectl config set-cluster ${K8S_CLUSTER_NAME} --server="${K8S_URL}" --embed-certs=true --certificate-authority=./ca.crt
kubectl config set-credentials gitlab-telematics-${CI_COMMIT_REF_NAME} --token="${K8S_TOKEN}"
kubectl config set-context telematics-dev-context --cluster=${K8S_CLUSTER_NAME} --user=gitlab-telematics-${CI_COMMIT_REF_NAME}
kubectl config use-context telematics-dev-context
When I do the deployment using that service account token I get the following error:
error: unable to recognize "deployment.yml": Get https://<CLUSTER_ADDRESS>/api?timeout=32s: x509: certificate signed by unknown authority
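One hedged way to narrow this down (the address placeholder and port 443 are assumptions) is to compare the CA the API server actually presents against the ca.crt that was embedded in the kubeconfig:
# Issuer of the serving certificate presented by the API server
openssl s_client -connect <CLUSTER_ADDRESS>:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer
# Subject of the CA that was embedded via --certificate-authority=./ca.crt
openssl x509 -in ca.crt -noout -subject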
The Service Account, Role and RoleBinding YAML:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-telematics-dev
  namespace: telematics-dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: telematics-dev-full-access
  namespace: telematics-dev
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods", "services"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: telematics-dev-view
  namespace: telematics-dev
subjects:
- kind: ServiceAccount
  name: gitlab-telematics-dev
  namespace: telematics-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: telematics-dev-full-access
The generated kubeconfig looks fine to me:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <REDACTED>
    server: https://<CLUSTER_ADDRESS>
  name: <CLUSTER_NAME>
contexts:
- context:
    cluster: <CLUSTER_NAME>
    user: gitlab-telematics-dev
  name: telematics-dev-context
current-context: telematics-dev-context
kind: Config
preferences: {}
users:
- name: gitlab-telematics-dev
  user:
    token: <REDACTED>
I managed to solve this. Sorry for the late answer. Posting this in case someone else is facing the same issue.
The following line is not needed:
kubectl config set-cluster ${K8S_CLUSTER_NAME} --server="${K8S_URL}" --embed-certs=true --certificate-authority=./ca.crt
As we are issuing tokens, only the token needs to be used.
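For completeness, a hedged sketch of how the token (and the matching ca.crt) can be read straight from the service account's token secret instead of the kops state store; this assumes the cluster still auto-creates a token secret for the service account:
SECRET_NAME=$(kubectl -n telematics-dev get sa gitlab-telematics-dev -o jsonpath='{.secrets[0].name}')
K8S_TOKEN=$(kubectl -n telematics-dev get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 -d)
kubectl -n telematics-dev get secret "$SECRET_NAME" -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt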
It is hard to help you with this case. I reproduced this on my test cluster and I can't come up with any advice other than following the step-by-step tutorial by Bitnami and double-checking the names. Using just your manifests and the linked tutorial I was able to create the user gitlab-telematics-dev, list pods, and then create a deployment in the telematics-dev namespace, so the problem is not in the config or in the names in the Roles etc. It seems to me that something must have been missed somewhere in the process.
What I can advise is to first try the commands as the created user: once you are able to list pods and create a deployment as gitlab-telematics-dev, your deployment should also work.
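For example, a quick check using the names from the manifests above (a hedged sketch, nothing beyond kubectl auth can-i):
kubectl auth can-i list pods -n telematics-dev --as system:serviceaccount:telematics-dev:gitlab-telematics-dev
kubectl auth can-i create deployments -n telematics-dev --as system:serviceaccount:telematics-dev:gitlab-telematics-dev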
Related
Friends, I'm new to Kubernetes and recently installed it manually by following a tutorial. When I execute the command kubectl exec -it -n kube-system coredns-867b8c5ddf-8xfz6 -- sh, I get the error "x509: certificate signed by unknown authority"; kubectl logs reports the same error. However, kubectl get nodes and kubectl get pods return information normally. This is the step where I configured RBAC authorization to allow the kube-apiserver to access the kubelet API on each worker node:
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
  - ""
  resources:
  - nodes/proxy
  - nodes/stats
  - nodes/log
  - nodes/spec
  - nodes/metrics
  verbs:
  - "*"
EOF
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
EOF
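A hedged sanity check after applying the two manifests, assuming admin.kubeconfig is allowed to impersonate users:
# Should print "yes" if the ClusterRoleBinding grants the "kubernetes" user access to the kubelet API subresources
kubectl --kubeconfig admin.kubeconfig auth can-i get nodes/proxy --as kubernetes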
This is admin.kubeconfig content:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t******tLQo=
    server: https://127.0.0.1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0t******Cg==
    client-key-data: LS0tL******LQo=
The content in ~/.kube/config is the same as in admin.kubeconfig. I checked and confirmed that my certificates have not expired. The dashboard's token authentication also seems to be affected by this problem and fails. My system is CentOS 7.7 and the Kubernetes component version is 1.22.4. I hope to get help.
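Since kubectl exec and kubectl logs are proxied by the API server to the kubelet on each node, one hedged check (the node address is a placeholder and 10250 is only the default kubelet port) is to see which CA issued the kubelet's serving certificate:
openssl s_client -connect <worker-node-ip>:10250 </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates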
I have this strange situation; how can I solve this problem?
ubuntu@anth-mgt-wksadmin:~$ kubectl get nodes
error: the server doesn't have a resource type "nodes"
ubuntu@anth-mgt-wksadmin:~$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
error: the server doesn't have a resource type "services"
ubuntu@anth-mgt-wksadmin:~$ kubectl cluster-info dump
Error from server (Forbidden): users "xxx@xxx.it" is forbidden: User "system:serviceaccount:gke-connect:connect-agent-sa" cannot impersonate resource "users" in API group "" at the cluster scope
I think the problem was generated by the following apply, which I ran while looking for a way to connect the admin cluster to the Cloud Console. How can I roll it back? (A possible rollback is sketched after the manifest below.)
USER_ACCOUNT=foo@example.com
cat <<EOF > /tmp/impersonate.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gateway-impersonate
rules:
- apiGroups:
  - ""
  resourceNames:
  - ${USER_ACCOUNT}
  resources:
  - users
  verbs:
  - impersonate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-impersonate
roleRef:
  kind: ClusterRole
  name: gateway-impersonate
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: connect-agent-sa
  namespace: gke-connect
EOF
# Apply impersonation policy to the cluster.
kubectl apply -f /tmp/impersonate.yaml
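For reference, a hedged rollback sketch that simply deletes the two objects created by the manifest above; whether this alone resolves the errors is an assumption:
kubectl delete clusterrolebinding gateway-impersonate
kubectl delete clusterrole gateway-impersonate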
I have copied the admin.conf file from one admin cluster node to the admin workstation and renamed it to kubeconfig.
root@anth-admin-host1:~# cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
I am experimenting with service accounts and user accounts.
I am able to create a CA/key for user accounts so that the user can be verified through the API server, but I am failing to do the same for service accounts.
I have created a kubeconfig file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ca.crt
    server: https://ip:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: namespace-test
    user: test
  name: test-kubernetes
current-context: "test-kubernetes"
kind: Config
preferences: {}
users:
- name: test
  user:
    client-certificate: test.crt
    client-key: test.key
When I use this kubeconfig file, based on the RBAC rules I can reach the API server:
$ kubectl --kubeconfig /tmp/rbac-test/test.kubeconfig get pods
No resources found in namespace-test namespace.
Sample of the file with which I create the namespace, service account, etc.:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    user: test
  name: test
  namespace: namespace-test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    user: test
  name: role-test
  namespace: namespace-test
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    user: test
  name: rolebinding-test
  namespace: namespace-test
subjects:
- kind: User
  name: test
roleRef:
  kind: Role
  name: role-test
  apiGroup: rbac.authorization.k8s.io
When I change the subject from a user to a service account, I lose access to the namespace:
subjects:
- kind: ServiceAccount
Then I try to get the pods and I get Forbidden:
$ kubectl --kubeconfig /tmp/rbac-test/test.kubeconfig get pods
Error from server (Forbidden): pods is forbidden: User "test" cannot list resource "pods" in API group "" in the namespace "namespace-test"
But when I check whether the service account can fetch the pods, it is allowed:
$ kubectl auth can-i get pods --as system:serviceaccount:namespace-test:test -n namespace-test
yes
Is there any way to retrieve or create CAs for service accounts? I want to be able to connect from outside the cluster through the API server while using a service account rather than a normal user.
The reason I want to use a service account and not a user is to be able to use the Dashboard as different users with token verification.
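A hedged sketch of what the kubeconfig user could look like for the service account: the credential is a bearer token rather than a client certificate. The credential name test-sa is illustrative, and the first line assumes the cluster still auto-creates a token secret for the service account:
SECRET=$(kubectl -n namespace-test get sa test -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n namespace-test get secret "$SECRET" -o jsonpath='{.data.token}' | base64 -d)
# On 1.24+ clusters without auto-created secrets: TOKEN=$(kubectl -n namespace-test create token test)
kubectl config --kubeconfig=/tmp/rbac-test/test.kubeconfig set-credentials test-sa --token="$TOKEN"
kubectl config --kubeconfig=/tmp/rbac-test/test.kubeconfig set-context test-kubernetes --user=test-sa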
I am trying to access all the namespaces and pods from another pod of mine, so I have created a ClusterRole, a ClusterRoleBinding and a service account. However, I am only able to access resources in the customer namespace, and I need to access resources in all namespaces. Is that possible?
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinupcontainers
  namespace: customer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
  namespace: customer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinupcontainers
  namespace: customer
subjects:
- kind: ServiceAccount
  name: spinupcontainers
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: rbac.authorization.k8s.io
Could anyone help to resolve this problem?
Thanks in advance
It seems in your YAML example you are using a RoleBinding as opposed to a ClusterRoleBinding. A RoleBinding only grants those permissions inside of a namespace. See also the Kubernetes Documentation on this topic:
A RoleBinding grants permissions within a specific namespace whereas a
ClusterRoleBinding grants that access cluster-wide.
The most important thing is to connect your service account to your ClusterRole with the proper binding, because the binding type decides the scope of the service account's abilities. In this case, you have to define a ClusterRoleBinding as shown below:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
subjects:
- kind: ServiceAccount
  name: spinupcontainers
  namespace: customer
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: "rbac.authorization.k8s.io"
If you want to test this from within a pod, you would set the respective service account on the pod, like below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - sleep
    - "4800"
    image: busybox:1.28
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  serviceAccountName: spinupcontainers # the service account created above
status: {}
And then finally you need to exec into the pod and run the proper curl command using the service account token. Do not forget that you can find the token file inside the pod (mounted for the service account defined in the pod YAML above) under /var/run/secrets/kubernetes.io/serviceaccount. After that you have to make an API call to the Kubernetes API server service (if you used kubeadm to create the cluster, it is already defined in the default namespace under the name kubernetes). Below is a proper API call to get the secrets in the default namespace:
curl -k -H "Authorization: Bearer $TOKEN" https://<kubernetes-api-fqdn>/api/v1/namespaces/default/secrets
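From inside the pod the same call can be made without -k by using the CA bundle that is mounted alongside the token; the paths below are the standard service account mount, and kubernetes.default.svc is the in-cluster API address:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets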
I'm trying to lock down a namespace in kubernetes using RBAC so I followed this tutorial.
I'm working on a baremetal cluster (no minikube, no cloud provider) and installed kubernetes using Ansible.
I created the following namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: lockdown
Service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-lockdown
  namespace: lockdown
Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lockdown
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: [""]
  verbs: [""]
RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-lockdown
subjects:
- kind: ServiceAccount
  name: sa-lockdown
roleRef:
  kind: Role
  name: lockdown
  apiGroup: rbac.authorization.k8s.io
And finally I tested the authorization using the following command:
kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown
This SHOULD return "No", but I got "Yes" :-(
What am I doing wrong?
Thanks
A couple of possibilities:
- Are you running the "can-i" check against the secured port or the unsecured port (add --v=6 to see; there is a sketch after this list)? Requests made against the unsecured (non-HTTPS) port are always authorized.
- RBAC is additive, so if there is an existing ClusterRoleBinding or RoleBinding granting "get pods" permission to that service account (or to one of the groups system:serviceaccounts:lockdown, system:serviceaccounts, or system:authenticated), then that service account will have that permission. You cannot "ungrant" permissions by binding more restrictive roles.
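A hedged sketch of both checks, reusing the command from the question:
# Verbose output shows the exact URL (and therefore the port) being queried
kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown --v=6
# Look for any other bindings that might already grant the service account access
kubectl get clusterrolebindings -o wide | grep -i lockdown
kubectl get rolebindings --all-namespaces -o wide | grep -i lockdown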
I finally found what the problem was.
The Role and RoleBinding must be created inside the targeted namespace.
I changed the Role and RoleBinding below by specifying the namespace directly inside the YAML.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: lockdown
  namespace: lockdown
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rb-lockdown
  namespace: lockdown
subjects:
- kind: ServiceAccount
  name: sa-lockdown
roleRef:
  kind: Role
  name: lockdown
  apiGroup: rbac.authorization.k8s.io
In this example I gave the service account sa-lockdown permission to get, watch and list pods in the lockdown namespace.
Now if I ask whether it can get pods: kubectl auth can-i get pods --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown returns yes.
On the contrary, if I ask whether it can get deployments: kubectl auth can-i get deployments --namespace lockdown --as system:serviceaccount:lockdown:sa-lockdown returns no.
You can also leave the files as they were in the question and simply create them using kubectl create -f <file> -n lockdown.