I have a simple containerized .NET Standard (4.7.2) application. It has a method that lists all namespaces in the cluster, using the C# Kubernetes client to interact with the API. According to the official documentation, default API server credentials are mounted into the pod and used to communicate with the API server, but when calling the Kubernetes API from the pod I get the following error:
Operation returned an invalid status code 'Forbidden'
My deployment YAML is very minimal:
apiVersion: v1
kind: Pod
metadata:
  name: cmd-dotnetstdk8stest
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: cmd-dotnetstdk8stest
    image: eddyuk/dotnetstdk8stest:1.0.8-cmd
    ports:
    - containerPort: 80
I think you have RBAC activated inside your cluster. You need to assign your pod a ServiceAccount bound to a Role that allows that ServiceAccount to list namespaces. When no ServiceAccount is specified in the pod template, the namespace's default ServiceAccount is assigned to the pods running in that namespace.
First, create the Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: <YOUR-NAMESPACE>
  name: namespace-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["namespaces"] # Resource is namespaces
  verbs: ["get", "list"] # Allows this role to get and list namespaces
Then create a new ServiceAccount inside your namespace:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: application-sa
  namespace: <YOUR-NAMESPACE>
Assign the newly created Role to the ServiceAccount with a RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-namespace-listing
  namespace: <YOUR-NAMESPACE>
subjects:
- kind: ServiceAccount
  name: application-sa # Your newly created ServiceAccount
  namespace: <YOUR-NAMESPACE>
roleRef:
  kind: Role
  name: namespace-reader # Your newly created Role
  apiGroup: rbac.authorization.k8s.io
Assign the ServiceAccount to your pod by adding it to the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: podname
  namespace: <YOUR-NAMESPACE>
spec:
  serviceAccountName: application-sa
Note that namespaces are cluster-scoped resources, so to list all namespaces across the cluster you would actually need a ClusterRole and ClusterRoleBinding rather than the namespaced variants above. You can read more about RBAC in the official docs. You may also prefer kubectl commands over YAML definitions, as sketched below.
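A rough kubectl equivalent of the steps above (a sketch; the names and the namespace placeholder mirror the YAML):
kubectl create serviceaccount application-sa -n <YOUR-NAMESPACE>
kubectl create role namespace-reader --verb=get,list --resource=namespaces -n <YOUR-NAMESPACE>
kubectl create rolebinding allow-namespace-listing --role=namespace-reader --serviceaccount=<YOUR-NAMESPACE>:application-sa -n <YOUR-NAMESPACE>
kubectl auth can-i list namespaces --as=system:serviceaccount:<YOUR-NAMESPACE>:application-sa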
I have a namespace development in my K8s cluster. I wanted to deploy Fluentd following fluentd-daemonset-elasticsearch-rbac.yaml. I ONLY changed:
The type of role from ClusterRole to Role (the rules part is the same)
The name of the ServiceAccount
namespace: kube-system to namespace: development in the ServiceAccount, Role and RoleBinding
The ServiceAccount in the RoleBinding to my own service account
When I deployed I got the following error:
start_pod_watch: Exception encountered setting up pod watch from Kubernetes API v1 endpoint https://<ip>:443/api: pods is forbidden: User "system:serviceaccount:development:my-svc-account" cannot list resource "pods" in API group "" at the cluster scope ({"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods is forbidden: User \\"system:serviceaccount:development:my-svc-account\\" cannot list resource \\"pods\\" in API group \\"\\" at the cluster scope","reason":"Forbidden","details":{"kind":"pods"},"code":403} (Fluent::ConfigError)
My question: is it mandatory to have a ClusterRole to deploy Fluentd in a cluster?
If you changed the ClusterRole to a Role, you also have to update the binding from a ClusterRoleBinding to a RoleBinding. Keep in mind that a Role only grants access to pods within its own namespace; if Fluentd has to watch pods across the whole cluster, a ClusterRole and ClusterRoleBinding are still required.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: development
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: fluentd
  namespace: development
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
  namespace: development
roleRef:
  kind: Role
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: development
---
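Before redeploying, you can verify the permissions with an impersonation check (a quick sanity check using the ServiceAccount from the YAML above):
kubectl auth can-i list pods -n development --as=system:serviceaccount:development:fluentd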
I've tried to set up a CronJob to patch a HorizontalPodAutoscaler, but the job returns:
horizontalpodautoscalers.autoscaling "my-web-service" is forbidden: User "system:serviceaccount:staging:sa-cron-runner" cannot get resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "staging"
I've tried setting up all the role permissions as below:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: staging
  name: cron-runner
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cron-runner
  namespace: staging
subjects:
- kind: ServiceAccount
  name: sa-cron-runner
  namespace: staging
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: staging
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scale-up-job
  namespace: staging
spec:
  schedule: "16 20 * * 1-6"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: scale-up-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa my-web-service --patch '{"spec":{"minReplicas":6}}'
          restartPolicy: OnFailure
kubectl auth can-i patch horizontalpodautoscalers -n staging --as sa-cron-runner also returns no, so the permissions can't be set up correctly.
How can I debug this? I can't see what is set up incorrectly.
I think the problem is that the CronJob needs to first get the resource and then patch it, so you need to explicitly grant the get permission in the Role spec as well.
The error also points at an authorization problem with getting the resource:
horizontalpodautoscalers.autoscaling "my-web-service" is forbidden: User "system:serviceaccount:staging:sa-cron-runner" cannot get resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "staging"
Try modifying the verbs part of the Role as:
verbs: ["patch", "get"]
You could also include list and watch, depending on what the script run inside the CronJob requires.
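As an aside, the --as sa-cron-runner flag in the question impersonates an ordinary user named sa-cron-runner rather than the ServiceAccount, so that check will always return no. To verify the ServiceAccount itself, reference it by its full name:
kubectl auth can-i get horizontalpodautoscalers -n staging --as=system:serviceaccount:staging:sa-cron-runner
kubectl auth can-i patch horizontalpodautoscalers -n staging --as=system:serviceaccount:staging:sa-cron-runner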
I am trying to access all the namespaces and pods from another pod. So, I have created a ClusterRole, ClusterRoleBinding and ServiceAccount. I am only able to access resources in the customer namespace, but I need to access resources in all namespaces. Is it possible?
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinupcontainers
  namespace: customer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
  namespace: customer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinupcontainers
  namespace: customer
subjects:
- kind: ServiceAccount
  name: spinupcontainers
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: rbac.authorization.k8s.io
Could anyone help to resolve this problem?
Thanks in advance
It seems in your YAML example you are using a RoleBinding as opposed to a ClusterRoleBinding. A RoleBinding only grants those permissions inside of a namespace. See also the Kubernetes Documentation on this topic:
A RoleBinding grants permissions within a specific namespace whereas a
ClusterRoleBinding grants that access cluster-wide.
The most important thing is to connect your ServiceAccount to your ClusterRole with a proper ClusterRoleBinding, because the binding type decides the scope of the ServiceAccount's abilities. Under these circumstances, you have to describe the ClusterRoleBinding as shown below:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
subjects:
- kind: ServiceAccount
  name: spinupcontainers
  namespace: customer
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: "rbac.authorization.k8s.io"
If you want to test this from within a pod, set the respective ServiceAccount in the pod spec like below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - args:
    - sleep
    - "4800"
    image: busybox:1.28
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  serviceAccountName: spinupcontainers
status: {}
Finally, exec into the pod and run a curl command using the ServiceAccount token. The token file is mounted into the pod via the assigned ServiceAccount at /var/run/secrets/kubernetes.io/serviceaccount. With that token you can call the Kubernetes API server service (if you used kubeadm to create the cluster, it is already defined in the default namespace under the name kubernetes). Below is an API call that lists the secrets in the default namespace:
curl -k -H "Authorization: Bearer $TOKEN" https://<kubernetes-api-fqdn>/api/v1/namespaces/default/secrets
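Note that $TOKEN is not set automatically. A minimal way to populate it from inside the pod (assuming the default token mount path shown above):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)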
I want to have a service account that can create a deployment, so I am creating a service account, then a role and then a rolebinding. The YAML files are below:
ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: testsa
  namespace: default
Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrole
  namespace: default
rules:
- apiGroups:
  - ""
  - batch
  - apps
  resources:
  - jobs
  - pods
  - deployments
  - deployments/scale
  - replicasets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
  - scale
RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: testrolebinding
  namespace: default
subjects:
- kind: ServiceAccount
  name: testsa
  namespace: default
roleRef:
  kind: Role
  name: testrole
  apiGroup: rbac.authorization.k8s.io
But after applying these files, when I run the following command to check whether the service account can create a deployment, it answers no.
kubectl auth can-i --as=system:serviceaccount:default:testsa create deployment
The exact answer is:
no - no RBAC policy matched
It works fine when I do checks for Pods.
What am I doing wrong?
My kubernetes versions are as follows:
kubectl version --short
Client Version: v1.16.1
Server Version: v1.12.10-gke.17
Since you're using a 1.12 cluster, you should also include the extensions API group in the Role for the deployments resource, i.e. apiGroups: ["", "batch", "apps", "extensions"]. On clusters of that era, deployments were still served under extensions; this group was deprecated in favor of apps and removed in Kubernetes 1.16: https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
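With extensions added to the apiGroups list, the check from the question should start returning yes:
kubectl auth can-i --as=system:serviceaccount:default:testsa create deployment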
What's the best approach to provide a .kube/config file in a REST service deployed on Kubernetes? This will enable my service to (for example) use the Kubernetes client API.
Create service account:
kubectl create serviceaccount example-sa
Create a role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: example-role
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
Create role binding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-role-binding
  namespace: default
subjects:
- kind: "ServiceAccount"
  name: example-sa
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
Create a pod using example-sa:
kind: Pod
apiVersion: v1
metadata:
  name: example-pod
spec:
  serviceAccountName: example-sa
  containers:
  - name: secret-access-container
    image: example-image
The most important line in the pod definition is serviceAccountName: example-sa. After creating the service account and adding this line to your pod's definition, you will be able to access the API access token at /var/run/secrets/kubernetes.io/serviceaccount/token. Most Kubernetes client libraries can load this in-cluster configuration automatically, so you usually don't need to ship a .kube/config file at all.
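For example, you could confirm from inside the pod that the token grants the access defined in example-role (a sketch; kubernetes.default.svc is the standard in-cluster API server address):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1/namespaces/default/pods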