Unexpected token showing up for custom ServiceAccount - kubernetes

I have set up a custom ServiceAccount with a ClusterRole binding. I can see the account and its ca.crt, namespace and token.
Definition:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: my-app
  name: my-app-svcaccount
  namespace: my-app-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-svcaccount
  namespace: my-app-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: my-app-svcaccount
  namespace: my-app-ns
In my pod spec I have specified the serviceAccountName and expect its token to be mounted into the pod. When I go into the pod, I see the /run/secrets/kubernetes.io/serviceaccount/ folder as expected.
Definition within deployment pod spec:
serviceAccountName: my-app-svcaccount
However, the token in that folder is not the one from my serviceAccountName, nor that of any other secret in my namespace. I'm pulling my hair out trying to work out the reason for this. Where could this token be coming from, and how can I find out where the unexpected token originates?
What I can see is that the mounted volume name does not refer to the ServiceAccount name, but rather to kube-api-access-..., and I don't know where that comes from.
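For reference, this is how I inspected the mounted volume and the secrets in the namespace (my-app-pod-xyz is a hypothetical pod name standing in for one of my deployment's pods):
kubectl -n my-app-ns get pod my-app-pod-xyz -o yaml | grep -A 6 kube-api-access
kubectl -n my-app-ns get secrets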
Thanks in advance.

Related

RoleBinding not granting permissions

I have the following RoleBinding (it was deployed by Helm):
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: environment-namespaces
    meta.helm.sh/release-namespace: namespace-metadata
  creationTimestamp: "2021-04-23T17:16:50Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: SA-DevK8s-admin
  namespace: dev-my-product-name-here
  resourceVersion: "29221536"
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/dev-my-product-name-here/rolebindings/SA-DevK8s-admin
  uid: 4818d6ed-9320-408c-82c3-51e627d9f375
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: SA-DevK8s#mydomain.com
When I log in to the cluster as SA-DevK8s#mydomain.com and run kubectl get pods -n dev-my-product-name-here, I get the following error:
Error from server (Forbidden): pods is forbidden: User "sa-devk8s#mydomain.com" cannot list resource "pods" in API group "" in the namespace "dev-my-product-name-here"
Shouldn't a user who has the ClusterRole of admin in a namespace be able to list the pods for that namespace?
Case Matters!!!!
Once I changed the user to be sa-devk8s#mydomain.com (instead of SA-DevK8s#mydomain.com), it all started working correctly!
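A quick way to verify the fix (this assumes your own credentials allow impersonation) is to ask the API server directly:
kubectl auth can-i list pods -n dev-my-product-name-here --as=sa-devk8s#mydomain.com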

How to restrict a user to one namespace on kubernetes Dashboard?

I have a custom role related to a specific namespace. I want to create a service account that will have access to the Dashboard and will only be able to see the namespace assigned to that role.
I have tried the following:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-green
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: green
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: role-green
  namespace: namespace-green
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testDashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: role-green
subjects:
- kind: ServiceAccount
  name: green
  namespace: kubernetes-dashboard
I retrieved the token with the following command:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep green | awk '{print $1}')
When I log in to the Dashboard I see only the default namespace, although I have assigned the new namespace to that role.
I am not able to figure out how to view only the resources of the new namespace; based on the permissions of the role, the service account should have limited access.
You don't need to create a new role.
You can just create a RoleBinding to the 'edit' ClusterRole with the new service account you have created, and it will work as you expect. Access will also be limited to just one namespace: kubernetes-dashboard.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: testDashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
- kind: ServiceAccount
  name: green
  namespace: kubernetes-dashboard
After that you can use the same token as before to test.
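An equivalent imperative command (using the same names as in the YAML above) would be roughly:
kubectl create rolebinding testDashboard --clusterrole=edit \
  --serviceaccount=kubernetes-dashboard:green -n kubernetes-dashboard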

Kubernetes service account to access all the namespaces

I am trying to access all the namespaces and pods from another one of my pods, so I have created a ClusterRole, a ClusterRoleBinding and a service account. I am only able to access resources in the customer namespace, but I need to access resources in all namespaces. Is this possible?
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinupcontainers
  namespace: customer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
  namespace: customer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinupcontainers
  namespace: customer
subjects:
- kind: ServiceAccount
  name: spinupcontainers
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: rbac.authorization.k8s.io
Could anyone help to resolve this problem?
Thanks in advance
It seems in your YAML example you are using a RoleBinding as opposed to a ClusterRoleBinding. A RoleBinding only grants those permissions inside of a namespace. See also the Kubernetes Documentation on this topic:
A RoleBinding grants permissions within a specific namespace whereas a
ClusterRoleBinding grants that access cluster-wide.
The most important thing is that you have to connect your service account to your ClusterRole with a proper ClusterRoleBinding, because the binding type decides the scope of the service account's abilities. Under these circumstances, you have to define the ClusterRoleBinding as shown below:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
subjects:
- kind: ServiceAccount
  name: spinupcontainers
  namespace: customer
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: "rbac.authorization.k8s.io"
If you want to test this from within a pod, you would specify the respective service account in the pod spec like below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - sleep
    - "4800"
    image: busybox:1.28
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  serviceAccountName: default
status: {}
Then finally you need to exec into the pod and run the proper curl command using the service account token. Do not forget that you can find the token file inside the pod at the path mounted for the service account defined in the pod YAML above (/var/run/secrets/kubernetes.io/serviceaccount). After that you have to call the Kubernetes API server service (if you used kubeadm to create the cluster, it is already defined in the default namespace under the name kubernetes). Below you can find the proper API call to get the secrets in the default namespace:
curl -k -H "Authorization: Bearer $TOKEN" https://<kubernetes-api-fqdn>/api/v1/namespaces/default/secrets
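A minimal sketch of doing this from inside the pod, assuming the default mount path above and the in-cluster DNS name kubernetes.default.svc for the API server:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets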

Receiving error when calling Kubernetes API from the Pod

I have a simple .NET Standard (4.7.2) application that is containerized. It has a method to list all namespaces in a cluster. I used the C# Kubernetes client to interact with the API. According to the official documentation, default API server credentials are created in the pod and used to communicate with the API server, but while calling the Kubernetes API from the pod I am getting the following error:
Operation returned an invalid status code 'Forbidden'
My deployment yaml is very minimal:
apiVersion: v1
kind: Pod
metadata:
  name: cmd-dotnetstdk8stest
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: cmd-dotnetstdk8stest
    image: eddyuk/dotnetstdk8stest:1.0.8-cmd
    ports:
    - containerPort: 80
I think you have RBAC activated inside your cluster. You need to assign a ServiceAccount to your pod and bind it to a Role that allows this ServiceAccount to list all namespaces. When no ServiceAccount is specified in the pod template, the namespace's default ServiceAccount is assigned to the pods running in that namespace.
First, you should create the Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: <YOUR NAMESPACE>
  name: namespace-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["namespaces"] # resource is namespaces
  verbs: ["get", "list"] # allowing this role to get and list namespaces
Create a new ServiceAccount inside your namespace:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: application-sa
  namespace: <YOUR-NAMESPACE>
Assign your newly created Role to the ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-namespace-listing
  namespace: <YOUR-NAMESPACE>
subjects:
- kind: ServiceAccount
  name: application-sa # your newly created ServiceAccount
  namespace: <YOUR-NAMESPACE>
roleRef:
  kind: Role
  name: namespace-reader # your newly created Role
  apiGroup: rbac.authorization.k8s.io
Assign the new Role to your Pod by adding a ServiceAccount to your Pod Spec:
apiVersion: v1
kind: Pod
metadata:
  name: podname
  namespace: <YOUR-NAMESPACE>
spec:
  serviceAccountName: application-sa
You can read more about RBAC in the official docs. You might also prefer to use kubectl commands instead of YAML definitions, as sketched below.
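A rough imperative equivalent of the manifests above (same names, same placeholder namespace) would be:
kubectl create role namespace-reader --verb=get,list --resource=namespaces -n <YOUR-NAMESPACE>
kubectl create serviceaccount application-sa -n <YOUR-NAMESPACE>
kubectl create rolebinding allow-namespace-listing --role=namespace-reader \
  --serviceaccount=<YOUR-NAMESPACE>:application-sa -n <YOUR-NAMESPACE>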

Kubernetes UI dashboard

I am trying to configure the Kubernetes UI dashboard with full admin permissions, so I created a YAML file, dashboard-admin.yaml.
The contents of my file are below:
apiVersion: rbac.authorization.k8s.io/v1.12.1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
So when I try to apply this file by executing the command
kubectl create -f dashboard-admin.yaml
1) I'm encountering an error as stated below:
error: error parsing dashboard-admin.yaml: error converting YAML to JSON: yaml: line 12: mapping values are not allowed in this context
2) Also, after running the kubectl proxy command, I'm unable to open the dashboard on my local machine using the link below:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Your error is related to YAML indentation. I've edited the question to show the correct format; or, if you'd like, you can use this one instead (note that the subjects entry is a list item, and the apiVersion should be rbac.authorization.k8s.io/v1):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Your K8s dashboard will not work unless you have correctly set up the RBAC rule above.
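If you would rather avoid the YAML entirely, an equivalent imperative command (using the ServiceAccount and namespace above) would be roughly:
kubectl create clusterrolebinding kubernetes-dashboard \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard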