How to run kubectl within a job in a namespace? - kubernetes

Hi, I saw documentation showing that kubectl can be run inside a pod in the default namespace.
Is it possible to run kubectl inside a Job resource in a specified namespace?
I did not see any documentation or examples for this.
When I tried adding a serviceAccount to the container, I got the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
This was when I ssh-ed into the container and ran kubectl.
Edit:
As I mentioned earlier, I had added the service account based on the documentation. Below is the YAML:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - list
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl
        command:
        - "bin/bash"
        - "-c"
        - "kubectl get pods"
      restartPolicy: Never
On running the job, I get the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"

Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. This means the permission aspect is the same as for a normal pod, so yes, it is possible to run kubectl inside a Job resource.
TL;DR:
Your YAML file is correct; there may have been something else wrong in your cluster. I recommend deleting and recreating these resources and trying again.
Also check your Kubernetes server version against the kubectl version in the job image; if they are more than one minor version apart, you may run into unexpected incompatibilities.
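For example, a clean re-apply could look like this (a sketch, assuming the manifests are saved as job-svc-account.yaml and job-kubectl.yaml, the file names used in the reproduction below):
# Remove the old objects (skip missing ones), then recreate them
$ kubectl delete -f job-kubectl.yaml -f job-svc-account.yaml --ignore-not-found
$ kubectl apply -f job-svc-account.yaml
$ kubectl apply -f job-kubectl.yaml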
Security Considerations:
The scope of your job's Role follows the best practice from the documentation (a specific role, bound to a specific service account, in a specific namespace).
Using a ClusterRoleBinding with the cluster-admin role would also work, but it is over-permissioned and not recommended, since it grants full admin control over the entire cluster.
Test Environment:
I deployed your config on Kubernetes 1.17.3 and ran the job with both bitnami/kubectl and bitnami/kubectl:1.17.3. It worked in both cases.
To avoid incompatibilities, use a kubectl image whose version matches your server's.
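A quick way to compare the two versions might look like this (a sketch; the image tag is illustrative, and the bitnami image's entrypoint is kubectl, so the arguments are passed straight to it):
# Server (and local client) version
$ kubectl version
# kubectl version shipped in the job image
$ docker run --rm bitnami/kubectl:1.17.3 version --client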
Reproduction:
$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl:1.17.3
        command:
        - "bin/bash"
        - "-c"
        - "kubectl get pods -n my-namespace"
      restartPolicy: Never
$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
I created two pods just so that kubectl get pods would have some output to log.
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created
Then I applied the Job, ServiceAccount, Role and RoleBinding:
$ kubectl get pods -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 88s
testing-stuff-ddpvf 0/1 Completed 0 13s
ubuntu 0/1 Completed 3 63s
Now let's check the testing-stuff pod log to see if it logged the command output:
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 76s
testing-stuff-ddpvf 1/1 Running 0 1s
ubuntu 1/1 Running 3 51s
As you can see, it has succeeded running the job with the custom ServiceAccount.
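If you want to double-check the RBAC wiring itself before running the job, you can also impersonate the service account from outside, for example:
# Should answer "yes" once the Role and RoleBinding are applied
$ kubectl auth can-i list pods -n my-namespace --as=system:serviceaccount:my-namespace:internal-kubectl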
Let me know if you have further questions about this case.

Create the service account like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
Create a ClusterRoleBinding using this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: internal-kubectl
  namespace: default # the namespace the ServiceAccount was created in
Now create the pod with the same config as given in the documentation.
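A minimal pod along those lines might look like this (a sketch; the pod name, image and command are illustrative, not taken from the documentation):
apiVersion: v1
kind: Pod
metadata:
  name: internal-kubectl-test
spec:
  serviceAccountName: internal-kubectl
  restartPolicy: Never
  containers:
  - name: kubectl
    image: bitnami/kubectl
    command: ["kubectl", "get", "pods", "--all-namespaces"]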

When you use kubectl from the pod for any operation, such as getting pods or creating roles and role bindings, it will use the default service account. That service account doesn't have permission to perform those operations by default. So you need to create a service account, role and role binding using a more privileged account: you should have a kubeconfig file with admin (or admin-like) privileges, and use that kubeconfig with kubectl from outside the pod to create the service account, role, rolebinding etc.
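For example (a sketch; the kubeconfig path and manifest file name are placeholders):
$ kubectl --kubeconfig=/path/to/admin.kubeconfig apply -f service-account-and-rbac.yaml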
After that is done, create the pod specifying that service account, and you should be able to perform the operations defined in the role from within this pod using kubectl.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: internal-kubectl
  containers: # example container added so the spec is complete; any image with kubectl works
  - name: kubectl
    image: bitnami/kubectl
    command: ["sleep", "4800"]
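Once the pod is running, you could verify the permissions from inside it, for example (assuming the container image ships kubectl, as in the sketch above):
$ kubectl exec -it my-pod -- kubectl get pods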

Related

How can a k8s namespace admin use top?

We have a shared tenant cluster, and we want our developers to be able to run kubectl top pods --namespace dev-namespace
But it seems to me that for top to be usable, you need to be able to run kubectl get nodes. But nodes are not namespaced.
Is there a solution?
We set up the namespace admin like this:
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: username#domain
And as a cluster admin I can run the top command, so metrics-server seems to be working fine.
Kubernetes has the API group metrics.k8s.io, which you can use to grant read permission for kubectl top pods -n <namespace>. If you grant the get and list permissions on pods in that group, you can run the command.
I tested the configuration below in a GKE cluster running Kubernetes 1.21 with kubectl top pod --as=system:serviceaccount:monitoring:test-account -n monitoring. With these permissions I can only run kubectl top pod in the monitoring namespace; other commands will fail.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-account
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: monitoring
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: monitoring
subjects:
- kind: ServiceAccount
  name: test-account
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
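To verify the binding, you could for example check access with impersonation (a quick sanity check, assuming the manifests above are applied):
# Both should answer "yes" for the metrics API in the monitoring namespace
$ kubectl auth can-i get pods.metrics.k8s.io -n monitoring --as=system:serviceaccount:monitoring:test-account
$ kubectl auth can-i list pods.metrics.k8s.io -n monitoring --as=system:serviceaccount:monitoring:test-account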

Setting up a Kubernetes Cronjob: Cannot get resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "staging"

I've tried to setup a cron job to patch a horizontal pod autoscaler, but the job returns horizontalpodautoscalers.autoscaling "my-web-service" is forbidden: User "system:serviceaccount:staging:sa-cron-runner" cannot get resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "staging"
I've tried setting up all the role permissions as below:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: staging
  name: cron-runner
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cron-runner
  namespace: staging
subjects:
- kind: ServiceAccount
  name: sa-cron-runner
  namespace: staging
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: staging
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scale-up-job
  namespace: staging
spec:
  schedule: "16 20 * * 1-6"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: scale-up-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa my-web-service --patch '{\"spec\":{\"minReplicas\":6}}'
          restartPolicy: OnFailure
kubectl auth can-i patch horizontalpodautoscalers -n staging --as sa-cron-runner also returns no, so the permissions can't be set up correctly.
How can I debug this? I can't see how this is set up incorrectly.
I think the problem is that the CronJob needs to first get the resource and then patch it, so you need to explicitly grant the get permission in the Role spec.
The error also points to the get on the resource being denied:
horizontalpodautoscalers.autoscaling "my-web-service" is forbidden: User "system:serviceaccount:staging:sa-cron-runner" cannot get resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "staging"
Try modifying the verbs part of the Role as:
verbs: ["patch", "get"]
You could also include list and watch, depending on what the scripts run inside the CronJob require.
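After re-applying the Role with the extra verb, you could re-check the permissions, for example (note the full service account username for impersonation):
$ kubectl auth can-i get horizontalpodautoscalers -n staging --as=system:serviceaccount:staging:sa-cron-runner
$ kubectl auth can-i patch horizontalpodautoscalers -n staging --as=system:serviceaccount:staging:sa-cron-runner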

Kubernetes service account to access all the namespaces

I am trying to access all the namespaces and pods from another pod, so I have created a ClusterRole, ClusterRoleBinding and ServiceAccount. However, I am only able to access resources in the customer namespace, and I need to access resources in all namespaces. Is that possible?
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spinupcontainers
  namespace: customer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
  namespace: customer
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "delete", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spinupcontainers
  namespace: customer
subjects:
- kind: ServiceAccount
  name: spinupcontainers
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: rbac.authorization.k8s.io
Could anyone help to resolve this problem?
Thanks in advance
It seems in your YAML example you are using a RoleBinding as opposed to a ClusterRoleBinding. A RoleBinding only grants those permissions inside of a namespace. See also the Kubernetes Documentation on this topic:
A RoleBinding grants permissions within a specific namespace whereas a
ClusterRoleBinding grants that access cluster-wide.
The most important thing is to connect your service account to your ClusterRole with a proper ClusterRoleBinding, because the binding type determines the scope of the service account's permissions. In this case, the ClusterRoleBinding should be described as shown below:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spinupcontainers
subjects:
- kind: ServiceAccount
  name: spinupcontainers
  namespace: customer
roleRef:
  kind: ClusterRole
  name: spinupcontainers
  apiGroup: "rbac.authorization.k8s.io"
If you want to test this from within a pod, specify the corresponding service account in the pod spec, like below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
  namespace: customer
spec:
  containers:
  - args:
    - sleep
    - "4800"
    image: busybox:1.28
    name: busybox
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  serviceAccountName: spinupcontainers # the service account bound above
status: {}
Finally, you can exec into the pod and run the appropriate curl command using the service account token. The token for the service account defined in the pod YAML above is mounted into the pod at /var/run/secrets/kubernetes.io/serviceaccount. With it you can call the Kubernetes API server service (if you used kubeadm to create the cluster, it is already defined in the default namespace under the name kubernetes). Below is the API call to get the secrets in the default namespace:
curl -k -H "Authorization: Bearer $TOKEN" https://<kubernetes-api-fqdn>/api/v1/namespaces/default/secrets
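The $TOKEN in the call above can be read from the mounted service account files inside the pod, for example (a sketch using the in-cluster service DNS name and the mounted CA certificate instead of -k):
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
$ curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1/namespaces/default/secrets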

How to create a service account to get a list of pods from inside a Kubernetes cluster?

I have created a service account to get a list of pods in minikube.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: list-pods
  namespace: default
rules:
- apiGroups:
  - ''
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: list-pods_demo-sa
  namespace: default
roleRef:
  kind: Role
  name: list-pods
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: demo-sa
  namespace: default
The problem is that I get an error message when I use the service account to get the list of pods: kubectl auth can-i list pod --as demo-sa always answers no.
You cannot use
kubectl auth can-i list pod --as <something>
to impersonate a ServiceAccount by its short name. --as impersonates users and --as-group impersonates groups, so --as demo-sa is treated as an ordinary user named demo-sa rather than the ServiceAccount. To impersonate the ServiceAccount you have to use its full username, system:serviceaccount:<namespace>:<name>.
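For example, impersonating the ServiceAccount by its full username should reflect the Role you created:
# Should answer "yes" with the Role and RoleBinding above in place
$ kubectl auth can-i list pods -n default --as=system:serviceaccount:default:demo-sa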
A workaround is to use the service account token.
kubectl get secret demo-sa-token-7fx44 -o=jsonpath='{.data.token}' | base64 -d
You can use the resulting token with any kubectl request. However, I checked with kubectl auth can-i list pod and I don't think auth can-i works with a token (you always get a yes).
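For example (a sketch; the secret name is the one from this cluster):
$ TOKEN=$(kubectl get secret demo-sa-token-7fx44 -o=jsonpath='{.data.token}' | base64 -d)
# If your kubeconfig user also has a client certificate, the certificate may take precedence over the token
$ kubectl get pods --token="$TOKEN"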

Receiving error when calling Kubernetes API from the Pod

I have a simple containerized .NET Standard (4.7.2) application. It has a method to list all namespaces in a cluster. I used the C# Kubernetes client to interact with the API. According to the official documentation, default credentials for the API server are created in a pod and used to communicate with the API server, but when calling the Kubernetes API from the pod I get the following error:
Operation returned an invalid status code 'Forbidden'
My deployment yaml is very minimal:
apiVersion: v1
kind: Pod
metadata:
  name: cmd-dotnetstdk8stest
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: cmd-dotnetstdk8stest
    image: eddyuk/dotnetstdk8stest:1.0.8-cmd
    ports:
    - containerPort: 80
I think you have RBAC activated in your cluster. You need to assign your pod a ServiceAccount that is bound to a Role allowing that ServiceAccount to list all namespaces. When no ServiceAccount is specified in the pod template, the namespace's default ServiceAccount is assigned to the pods running in that namespace.
First, you should create the Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: <YOUR NAMESPACE>
  name: namespace-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["namespaces"] # Resource is namespaces
  verbs: ["get", "list"] # Allowing this role to get and list namespaces
Create a new ServiceAccount inside your Namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: application-sa
  namespace: <YOUR-NAMESPACE>
Assign your newly created Role to the ServiceAccount:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: allow-namespace-listing
  namespace: <YOUR-NAMESPACE>
subjects:
- kind: ServiceAccount
  name: application-sa # Your newly created Service-Account
  namespace: <YOUR-NAMESPACE>
roleRef:
  kind: Role
  name: namespace-reader # Your newly created Role
  apiGroup: rbac.authorization.k8s.io
Grant the pod these permissions by adding the ServiceAccount to your pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: podname
  namespace: <YOUR-NAMESPACE>
spec:
  serviceAccountName: application-sa
You can read more about RBAC in the official docs. You may also prefer to use kubectl commands instead of YAML definitions.
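For example, roughly equivalent imperative commands might look like this (a sketch; replace <YOUR-NAMESPACE> with your namespace):
$ kubectl create role namespace-reader --verb=get,list --resource=namespaces -n <YOUR-NAMESPACE>
$ kubectl create serviceaccount application-sa -n <YOUR-NAMESPACE>
$ kubectl create rolebinding allow-namespace-listing --role=namespace-reader --serviceaccount=<YOUR-NAMESPACE>:application-sa -n <YOUR-NAMESPACE>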