So, as you suggested, I created another pod of kind: Job and included script.sh in it.
In script.sh, I run "kubectl exec" against the main pod to run a few commands.
The script gets executed, but it fails with the error: cannot create resource "pods/exec" in API group "".
So I created a ClusterRole with resources: ["pods/exec"] and bound it to the default service account using a ClusterRoleBinding:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: service-account-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
In the Job pod spec, I reference the service account as shown below:
restartPolicy: Never
serviceAccountName: default
but I still get the same error. What am I doing wrong here?
Error from server (Forbidden): pods "mongo-0" is forbidden: User "system:serviceaccount:default:default" cannot create resource "pods/exec" in API group "" in the namespace "default"
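As a quick sanity check (a sketch, assuming the Job really runs as the default service account in the default namespace), the effective permission can be queried directly:
kubectl auth can-i create pods --subresource=exec \
  --as=system:serviceaccount:default:default -n default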
If this is something that needs to be run regularly for maintenance, look into the Kubernetes DaemonSet object.
Related
How do you disable shell or bash access to pods in a container? I do not want anyone to get access inside the pod via kubectl exec, docker exec, or k9s.
kubectl is a CLI tool, so it connects to the K8s API server and authenticates against it.
You can restrict users by their Role, so using RBAC with the proper permissions will resolve your issue.
Ref: https://kubernetes.io/docs/reference/access-authn-authz/rbac/
Example:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: default-user-role
  namespace: default
rules:
- apiGroups: [""]
  resources:
  - pods/attach
  - pods/exec
  verbs: [""]
- apiGroups: ["", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: default-user-view
  namespace: default
subjects:
- kind: ServiceAccount
  name: serviceaccount
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: default-user-role
Check the permissions with:
kubectl auth can-i create pods --subresource=exec --as=system:serviceaccount:default:serviceaccount
minikube start \
  --extra-config=apiserver.enable-admission-plugins=PodSecurityPolicy \
  --addons=pod-security-policy
We have a default namespace in which the nginx-sa service account does not have the rights to launch the nginx container.
When creating a pod with the command
kubectl run nginx --image=nginx -n default --as system:serviceaccount:default:nginx-sa
we get the error
Error: container has runAsNonRoot and image will run as root (pod: "nginx_default(49e939b0-d238-4e04-a122-43f4cfabea22)", container: nginx)
As I understand it, I need to write a PSP that allows the nginx-sa service account to run this container, but I do not understand how to write it correctly for a specific service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-sa
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nginx-sa-role
  namespace: default
rules:
- apiGroups: ["extensions", "apps", ""]
  resources: ["deployments", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nginx-sa-role-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: nginx-sa
  namespace: default
roleRef:
  kind: Role
  name: nginx-sa-role
  apiGroup: rbac.authorization.k8s.io
...but I do not understand how to write it correctly for a specific service account
After you have your special PSP ready for nginx, you can grant nginx-sa permission to use that PSP like this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: role-to-use-special-psp
rules:
- apiGroups:
  - policy
  resourceNames:
  - special-psp-for-nginx
  resources:
  - podsecuritypolicies
  verbs:
  - use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: bind-to-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: role-to-use-special-psp
subjects:
- kind: ServiceAccount
  name: nginx-sa
  namespace: default
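For reference, a minimal sketch of what the special-psp-for-nginx referenced above could look like, assuming the only goal is to let the image run as root; the name and settings here are illustrative, not a hardened policy:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: special-psp-for-nginx   # must match the resourceNames entry in the ClusterRole above
spec:
  privileged: false
  runAsUser:
    rule: RunAsAny              # allows containers that run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'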
I am seeing this error for user jenkins when deploying.
Error: pods is forbidden: User "system:serviceaccount:ci:jenkins" cannot list pods in the namespace "kube-system"
I have created a definition for the service account:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
I have created a ClusterRoleBinding
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: kube-system
Any advice?
Without a namespace (in the ServiceAccount creation) you will automatically create it in the default namespace. The same goes for a Role (which is always a single-namespace resource).
What you need to do is create a ClusterRole with the correct permissions (basically just change your Role into a ClusterRole) and then set the correct namespace either on the ServiceAccount resource or in the ClusterRoleBinding.
You can also skip creating the Role and RoleBinding, as the ClusterRole and ClusterRoleBinding will supersede them either way.
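A sketch of how that could look here, assuming the Jenkins ServiceAccount lives in the ci namespace (as the error message suggests):
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["create", "delete", "get", "list", "patch", "update", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: ci   # the namespace the ServiceAccount actually lives in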
--
With that said, it's always good practice to create a specific ServiceAccount and RoleBinding per namespace when it comes to deployments, so that you don't accidentally create an admin account which is used in a remote CI tool like... Jenkins ;)
I'm trying to create RBAC Role / rules for a service that needs a persistent volume and it's still failing with forbidden error.
Here is my role config:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-full-access
  namespace: logdrop
rules:
- apiGroups: ["", "extensions", "apps", "autoscaling"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources:
  - jobs
  - cronjobs
  verbs: ["*"]
And this is my cut down PersistentVolume manifest:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: logdrop-pv
  namespace: logdrop
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: logdrop
    name: logdrop-pvc
  hostPath:
    path: /efs/logdrop/logdrop-pv
When I try to apply it I get a forbidden error.
$ kubectl --kubeconfig ~/logdrop/kubeconfig-logdrop.yml apply -f pv-test.yml
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=persistentvolumes", GroupVersionKind: "/v1, Kind=PersistentVolume"
Name: "logdrop-pv", Namespace: ""
Object: &{map["apiVersion":"v1" "kind":"PersistentVolume" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "name":"logdrop-pv"] "spec":map["accessModes":["ReadWriteMany"] "capacity":map["storage":"10Gi"] "claimRef":map["name":"logdrop-pvc" "namespace":"logdrop"] "hostPath":map["path":"/efs/logdrop/logdrop-pv"] "persistentVolumeReclaimPolicy":"Retain"]]}
from server for: "pv-test.yml": persistentvolumes "logdrop-pv" is forbidden: User "system:serviceaccount:logdrop:logdrop-user" cannot get resource "persistentvolumes" in API group "" at the cluster scope
On the last line it specifically says resource "persistentvolumes" in API group "" - that's what I have allowed in the rules!
I can create the PV with admin credentials from the same yaml file and I can create any other resources (pods, services, etc) with the logdrop permissions. Just the PersistentVolume doesn't work for some reason. Any idea why?
I'm using Kubernetes 1.15.0.
Update:
This is my role binding as requested:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: logdrop-user-view
  namespace: logdrop
subjects:
- kind: ServiceAccount
  name: logdrop-user
  namespace: logdrop
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: logdrop-user-full-access
It's not a ClusterRoleBinding as my intention is to give the user access only to one namespace (logdrop), not to all namespaces across the cluster.
PVs, namespaces, nodes, and storage classes are cluster-scoped objects. As a best practice, to be able to list/watch those objects, you need to create a ClusterRole and bind it to a ServiceAccount via a ClusterRoleBinding. As an example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: <name of your cluster role>
rules:
- apiGroups: [""]
  resources:
  - nodes
  - persistentvolumes
  - namespaces
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: <name of your cluster role binding>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: <name of your cluster role which should be matched with the previous one>
subjects:
- kind: ServiceAccount
  name: <service account name>
I see a potential problem here.
PersistentVolumes are cluster scoped resources. They are expected to be provisioned by the administrator without any namespace.
PersistentVolumeClaims, however, can be created by users within a particular namespace, as they are namespaced resources.
That's why when you use admin credentials it works but with logdrop it returns an error.
Please let me know if that makes sense.
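For the namespaced side of this, a minimal sketch of the PersistentVolumeClaim the logdrop user could create instead (assuming the claim should match the logdrop-pvc referenced in the PV's claimRef):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logdrop-pvc
  namespace: logdrop
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""   # may be needed so the claim binds to the pre-created PV instead of triggering dynamic provisioning
  resources:
    requests:
      storage: 10Gi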
The new role needs to be granted to a user, or group of users, with a RoleBinding, e.g.:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: logdrop-rolebinding
  namespace: logdrop
subjects:
- kind: User
  name: logdrop-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: logdrop-user-full-access
  apiGroup: rbac.authorization.k8s.io
I'm trying to create a read only user. I want the user to be able to list nodes and pods and view the dashboard. I got the certs created and can connect but I'm getting the following error.
$ kubectl --context minikube-ro get pods --all-namespaces
Error from server (Forbidden): pods is forbidden: User "erst-operation" cannot list pods at the cluster scope
My cluster role...
$ cat helm/namespace-core/templates/pod-reader-cluster-role.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: '*'
  name: pod-reader
rules:
- apiGroups: ["extensions", "apps"]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
My cluster role binding...
$ cat helm/namespace-core/templates/pod-reader-role-binding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: erst-operation
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
I'm aware the above shouldn't grant permissions to see the dashboard but how do I get it to just list the pods?
Your ClusterRole should include the core API group, as the pods resource is in the core group.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: '*'
  name: pod-reader
rules:
- apiGroups: ["extensions", "apps", ""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
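After applying the updated ClusterRole, the permission can be double-checked with something like the following (assuming the erst-operation user name from the error message):
kubectl auth can-i list pods --as erst-operation --all-namespaces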