Does Istio recommend using Roles and ClusterRoles?

I am trying to set up my Helm chart to be able to deploy a VirtualService. My deploy user has the edit ClusterRole bound to it, but I realized that because Istio is not part of the core Kubernetes distribution, the edit ClusterRole does not have permission to create a VirtualService (or even look at one).
I can, of course, make my own Roles and ClusterRoles if needed, but I figured I would check whether Istio has a recommended Role or ClusterRole for this.
But all the docs that I can find for Istio Roles and ClusterRoles are for old versions of Istio.
Does Istio not recommend using Roles and ClusterRoles anymore? If not, what do they recommend? If they do, where are the docs for it?

I ended up using the ClusterRoles below. They aggregate into the standard Kubernetes admin, edit and view roles. (My edit role only allows access to VirtualServices because that fit my situation.)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-admin
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-edit
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
  resources: ["virtualservices"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-view
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
  resources: ["*"]
  verbs: ["get", "list", "watch"]

Related

How to restrict kube-monkey/chaoskube from getting cluster-wide permissions?

In order to run a high-availability test in a Kubernetes cluster, I use a tool such as chaoskube or kube-monkey, which kills random pods in namespaces to create "chaos" and see how the system and applications react.
By default these tools need a ClusterRole so that their service account can list/kill pods across all namespaces in the cluster.
In my situation I want to install the tool and run the test in just one namespace (namespace x).
Is there any way to restrict the service account's permissions so that it can only list/kill pods in namespace x, rather than across the whole cluster?
I already tried creating a Role & RoleBinding in namespace x, but I still get the same RBAC error, as the service account expects cluster-wide permissions:
"pods is forbidden: User \"system:serviceaccount:x:chaoskube-sa\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
Update: Role & RoleBinding
These are the default permissions for its service account:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: chaoskube-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: chaoskube-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: chaoskube-role
subjects:
- kind: ServiceAccount
  name: chaoskube-sa
  namespace: x
With this configuration it works fine.
Now with restricted permissions for a specific namespace:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: chaoskube-role
  namespace: x
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: chaoskube-rolebinding
  namespace: x
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: chaoskube-role
subjects:
- kind: ServiceAccount
  name: chaoskube-sa
  namespace: x
It cannot list the pods, and I receive the RBAC error.
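The error message says the list is happening "at the cluster scope", which a namespaced Role can never grant. One way to see the difference between the two setups is to ask the API server directly via impersonation; a quick check:
# Allowed by the Role/RoleBinding above (namespaced request):
kubectl auth can-i list pods --namespace x --as system:serviceaccount:x:chaoskube-sa
# Denied, matching the error (cluster-scoped request):
kubectl auth can-i list pods --all-namespaces --as system:serviceaccount:x:chaoskube-sa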

Protect Deployments from manual deletion

We have an Azure Kubernetes cluster and use a Helm chart to manage a list of Deployments.
I would like to somehow block manual removal of the Deployments. Deletion of the pods is fine; we actually want to be able to "restart" the services inside to clean up caches and so on.
I'm sorry for the short question; I have been searching for a while but so far have found nothing promising.
You can configure RBAC in the K8s cluster.
Role for the deployment manager
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-manager
rules:
- apiGroups: ["*"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Role for developer
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developer
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
Role binding deployment manager
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: deployment-manager-binding
subjects:
- kind: User
  name: admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
Role binding developer
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: developer-manager-binding
subjects:
- kind: User
  name: dev
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
You can create two new kubeconfig contexts (one per user) and use them to check access:
kubectl --context=dev-context get pods
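A minimal sketch of creating those contexts, assuming both users' credentials have already been added to your kubeconfig and with the cluster name as a placeholder:
kubectl config set-context manager-context --cluster=<your-cluster> --user=admin --namespace=default
kubectl config set-context dev-context --cluster=<your-cluster> --user=dev --namespace=default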
You can read more at: https://docs.bitnami.com/tutorials/configure-rbac-in-your-kubernetes-cluster/

Creating a Kubernetes dashboard in a restricted cluster where Roles, RoleBindings, etc. are forbidden and you have no access outside the namespace

I have access to only one namespace inside the cluster, and even that is restricted.
kind: Role
kind: ClusterRole
kind: RoleBinding
kind: ClusterRoleBinding
are forbidden to me. So I'm not able to create the Kubernetes dashboard as per the recommended YAML.
How do I get around this?
It's not possible to achieve this unless you ask someone with enough rights to create the objects you can't.
Here is a sample manifest used to apply the dashboard to a cluster. As you can see, you have to be able to manage Role, ClusterRole, RoleBinding and ClusterRoleBinding objects to apply it.
So it's impossible to create the dashboard with only the rights you have, as these objects are essential in this case.
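One way to confirm up front exactly what you are allowed to do in your namespace (and that the RBAC kinds are off limits) is kubectl's built-in permission listing; a quick check, with the namespace name as a placeholder:
kubectl auth can-i --list --namespace <your-namespace>
kubectl auth can-i create rolebinding --namespace <your-namespace>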
Here is the part affected by your missing rights:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster", "dashboard-metrics-scraper"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
  verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

RBAC for the Kubernetes Dashboard

I have a user "A" and namespaces X, Y, Z. I have created an RBAC Role and RoleBinding for user "A", who has access to namespace "X".
I wanted to give user "A" access to the Kubernetes dashboard (which requires a Role and RoleBinding in kube-system). But when I grant access to the dashboard, user "A" is able to see all the namespaces.
I want him to see only namespace X, which he has access to.
How could I go about this?
What's the version of your Dashboard? As far as I know, from 1.7 on, Dashboard has used a more secure setup. That means that by default it has only the minimal set of privileges required to make Dashboard work.
Anyway, you can check the privileges of the ServiceAccount used by Dashboard and make sure it has only the minimal privileges, like this:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
# Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
# Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
# Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
Then create RBAC rules that give user A full privileges on namespace X:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-A-admin
  namespace: X
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: A
Make sure user A doesn't have any other RBAC rules.
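To verify the scoping, you can impersonate the user; a quick check (assuming the username is literally A, and that Y is one of the namespaces he should not see):
kubectl auth can-i list pods --as A --namespace X   # expect: yes
kubectl auth can-i list pods --as A --namespace Y   # expect: no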

Kubernetes [RBAC]: User with access to specific Pods

I need to give an external support team access to a set of pods within a namespace.
I've been reading about the RBAC API and [Cluster]Roles and [Cluster]RoleBindings, but I could not find anything about how to apply a role to a group of pods (based on annotations or labels).
Does anyone know if it is possible to do that?
This is the Role that I use now and need to limit to a specific set of pods:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <ClientX>-PodMonitor
  namespace: <namespace>
rules:
- apiGroups: [""]
  verbs: ["get", "list"]
  resources: ["pods", "pods/log"]
If you guys need more details, please let me know.
Thanks.
Try defining the Role with resourceNames, as in this example from the docs:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: configmap-updater
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["my-configmap"]
  verbs: ["update", "get"]