I want to use OPA (Open Policy Agent) in Kubernetes, but some things are still not clear to me:
Let's take a specific case: a pod is created in a namespace, and OPA can read the namespace name from the pod object. But can we also get the namespace object itself, to learn which authority the namespace belongs to?
More explicitly: can OPA query the Kubernetes cluster for other objects?
For instance, suppose a pod named Test is created, and I want to allow this creation only for an authority called TestAuthority. When the pod is created we know its namespace, but not its authority. To figure out which authority the pod belongs to, I need the namespace object so I can inspect its labels. Can we do that with OPA?
Additionally, can we say "allow pod creation only with the names Test1, Test2, and Test3", so that any pod creation with the name Test4 is denied?
Thank you in advance for your help
(1) Yes, see https://github.com/open-policy-agent/kube-mgmt#caching or https://github.com/open-policy-agent/gatekeeper#replicating-data, depending on which integration you want to use. Both can replicate objects from Kubernetes into OPA so they can be referenced in policies.
(2) Yes, you can write policies like that.
Credit: Patrick East, answered on the OPA Slack.
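To make (1) and (2) concrete, here is a minimal Rego sketch. It assumes kube-mgmt is run with something like --replicate-cluster=v1/namespaces so Namespace objects appear under data.kubernetes.namespaces, and it assumes a hypothetical authority label on the namespace; adjust both to your setup:

```rego
package kubernetes.admission

# Hypothetical allowlist for question (2): only these pod names may be created.
allowed_pod_names := {"Test1", "Test2", "Test3"}

deny[msg] {
    input.request.kind.kind == "Pod"
    name := input.request.object.metadata.name
    not allowed_pod_names[name]
    msg := sprintf("pod name %q is not in the allowlist", [name])
}

# Question (1): look up the namespace object replicated by kube-mgmt and
# check its (hypothetical) "authority" label.
deny[msg] {
    input.request.kind.kind == "Pod"
    ns := input.request.namespace
    authority := data.kubernetes.namespaces[ns].metadata.labels.authority
    authority != "TestAuthority"
    msg := sprintf("namespace %q belongs to authority %q, not TestAuthority", [ns, authority])
}
```

With this, a pod named Test4, or a pod in a namespace whose authority label is not TestAuthority, would be denied.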
Related
I have a Service running, e.g. my-svc-1, and I run a Deployment that creates a Service of the same name, my-svc-1. What would happen?
A Service name must be unique within a namespace, but not across namespaces, so creating a second my-svc-1 in the same namespace would be rejected (kubectl apply would instead update the existing object). The same uniqueness rule applies to all namespace-scoped objects, for example Deployments, Services, and Secrets.
Namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces.
If you really need the same Service name for a different software version, you can create another namespace, say B, and create a Service named my-svc-1 there.
working-with-objects-namespaces
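To make the answer above concrete, a sketch (namespace names a and b are illustrative): the two manifests below coexist happily because the Services live in different namespaces.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc-1
  namespace: a          # first namespace
spec:
  selector:
    app: my-app-v1
  ports:
    - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-svc-1        # same name, different namespace: no conflict
  namespace: b
spec:
  selector:
    app: my-app-v2
  ports:
    - port: 80
```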
If I have a microservice app within a namespace, I can easily get all of my namespaced resources in that namespace using the k8s API. I cannot, however, view which non-namespaced resources are being used by the microservice app. If I want to see my non-namespaced resources, I can only see them all at once, with no indication of which ones are dependencies of the microservice app.
How can I find the dependencies related to my application? I'd like to be able to get references to things like PersistentVolumes, StorageClasses, ClusterRoles, etc. that are being used by the app's namespaced resources.
Your code, running in a pod container inside a namespace, runs using a ServiceAccount set via pod.spec.serviceAccountName.
If not set, it runs as the namespace's default ServiceAccount.
You need to create a ClusterRole granting the specific verbs on the cluster-wide resources, then bind this ClusterRole to the ServiceAccount. Because the resources are cluster-scoped, the binding must be a ClusterRoleBinding; a RoleBinding only grants access within its own namespace, which does not cover cluster-scoped resources.
Then your pod, using a Kubernetes client with the "in-cluster config" auth method, will be able to query the API server to get/list/watch/delete/patch... the said cluster-wide resources.
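A minimal sketch of the RBAC side, assuming a ServiceAccount app-sa in namespace my-app (both names hypothetical) that should be able to read PersistentVolumes and StorageClasses:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-resource-reader
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-resource-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-resource-reader
subjects:
  - kind: ServiceAccount
    name: app-sa        # hypothetical ServiceAccount
    namespace: my-app   # its namespace
```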
This is definitely a non-trivial task, because of the many ways such a dependency can come into play: whenever an object "uses" another one, we could identify a dependency. The issue is that this "use" relation can take many forms: e.g., a Pod can reference a Volume directly in its definition (a sort of direct dependency), but it can also use a PersistentVolumeClaim, which would then instantiate a PV through use of a StorageClass, and these relations are only known to Kubernetes at run time, when the YAML definitions are applied.
In other words:
To chase dependencies, you would have to inspect the YAML description of the resources in use, knowing the semantics of each: there is no single depends: field; instead one needs to follow, e.g., the spec.storageClassName of a PVC, the spec.volumes of a Pod, etc.
In some cases, even that would not be enough: e.g., to match Services to Pods one would have to evaluate label selectors and match ports on each side.
All of this would need to be done by extracting YAML from a running K8s cluster, since some relations between resources would not be known until they are instantiated.
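To illustrate that indirect chain, here is a hypothetical Pod → PVC → StorageClass example; the PersistentVolume at the end of the chain only appears once the claim is bound at run time:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd               # cluster-scoped: no namespace
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: fast-ssd   # dependency on the StorageClass
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim  # dependency on the PVC
```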
You could check the article "How do you visualise dependencies in your Kubernetes YAML files?" by Daniele Polencic, which shows a few tools that can be used to visualize dependencies:
There isn't any static tool that analyses YAML files. But you can visualise your dependencies in the cluster with Weave Scope or KubeView, or by tracing the traffic with Istio.
How can I prevent a user from spawning pods in a namespace with highly privileged ServiceAccounts, while still allowing them to create namespaces?
For example, I have a cluster with Velero in a velero namespace. I want to prevent the user from creating pods that use the velero ServiceAccount, so that they cannot obtain its privileges. But I do want the user to be able to create namespaces and to use ServiceAccounts under a restricted PSP.
In my opinion the idiomatic way of enforcing this in Kubernetes is by creating a dynamic validating admission controller.
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/ https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#validatingadmissionwebhook
I know it may sound a bit complex, but trust me, it's really simple. Essentially, an admission controller is just a webhook endpoint (a piece of code) which can change and/or enforce a certain state on created objects.
So in your case: create a dynamic validating webhook and simply disallow creation of pods that do not match your restrictions, with a corresponding relevant error message.
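For orientation, a sketch of the registration object; the webhook service name, namespace, and path are hypothetical, and the actual rejection logic (e.g. "spec.serviceAccountName must not be velero") lives in the webhook backend itself:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-serviceaccount-check
webhooks:
  - name: pods.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:                           # only intercept pod creation
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: pod-policy-webhook     # hypothetical backend Service
        namespace: webhook-system
        path: /validate
      caBundle: <base64-encoded-CA-cert>   # placeholder
```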
First of all, the ServiceAccount used by Velero lives in the velero namespace. So if the user doesn't have RBAC permissions to do anything in the velero namespace, they will not be able to use Velero's ServiceAccount. You should define RBAC for users in such a way that they can only CRUD resources in the intended namespaces and cannot CRUD resources in other namespaces. When I say resources, that includes ServiceAccounts.
We are going to provide customers a feature by deploying and running a container in the customer's Kubernetes environment. After the job is done, we will clean up the container. Currently the plan is to use the k8s default namespace, but I'm not sure whether that would be a concern for customers. I don't have much experience in the k8s field. Should we give customers an option to specify a namespace in which to run the container, or just use the default namespace? I appreciate your suggestions!
I would recommend that you not use the default namespace for anything, ever.
The following is more visceral than objective, but it's drawn from many years' experience of Kubernetes. In 2016, a now-former colleague and I blogged about the use of namespaces:
https://kubernetes.io/blog/2016/08/kubernetes-namespaces-use-cases-insights/
NB since then, RBAC was added and it permits enforcing separation, securely.
Although it exists as a named (default) namespace, it behaves as if there is no namespace (as if the cluster had none). It may be (!?) that it was retconned into Kubernetes after namespaces were added.
Unless your context is defined to be a specific other namespace, kubectl ... behaves as kubectl ... --namespace=default. So it's easy to accidentally pollute, and be impacted by pollution in, this namespace. I'm sure your team will use code for your infrastructure, but mistakes happen, and "I forgot to specify the namespace" is easily done (and rarely wanted).
Using non-default namespaces becomes very intentional, explicit and, I think, precise. You must, for example (per david-maze's answer), be more intentional about RBAC for the namespace's resources.
Namespaces are a mechanism that promotes multi-tenancy, which is desirable for separating customers (business units, versions, etc.).
You can't delete the default namespace, but you can delete (and, as a consequence, delete all the resources contained in) any non-default namespace.
I'll think of more, I'm sure!
Update
Corollary: generally don't hard-code a namespace in your specs; instead provide it at deploy time, e.g. kubectl apply --filename=x.yaml --namespace=${NAMESPACE}
I'd consider the namespace name pretty much a required option. I would default to the namespace name specified in the .kube/config file, if that's at all a choice for you. (That may not be default.)
RBAC rules or organizational policies also might mean the default namespace can't or shouldn't be used. One of the clusters I work with is a shared cluster where each user has their own namespace, enforced by RBAC policies; except for cluster admins, nobody gets to use default, and everybody needs to be able to configure the namespace to run in their own.
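For reference, this is what pinning a namespace in a kubeconfig context looks like (cluster, user, and namespace names here are hypothetical); kubectl config set-context --current --namespace=my-team achieves the same:

```yaml
# Fragment of ~/.kube/config
contexts:
  - name: dev
    context:
      cluster: my-cluster
      user: my-user
      namespace: my-team   # kubectl now defaults to this namespace
current-context: dev
```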
I have an application namespace with 30 services. Most are stateless Deployments, mixed with some StatefulSets etc. Fairly standard stuff, that is.
I need to grant a special user a Role that can only exec into certain Pods. Currently RBAC grants exec rights on all pods in the namespace, but I need to tighten that down.
The problem is that the Pods are created by a Deployment (configurator), so the Pod names are generated: configurator-xxxxx-yyyyyy. You cannot use a glob in resourceNames (i.e. configurator-*), and a Role cannot grant exec on a Deployment directly.
So far I've thought about:
Converting the Deployment into a StatefulSet or a plain Pod, so the Pod would have a known, non-generated name and no glob would be needed
Moving the Deployment into a separate namespace, so the namespace-wide exec right is no longer a problem
Both of these work, but neither is optimal. Is there a way to write a proper Role for this?
RBAC, as it stands today, doesn't allow filtering resources by attributes other than namespace and resource name. The discussion is open here.
Thus, the namespace is the smallest unit at which access to pods can be authorized. Services should be separated into namespaces according to which users need access to them.
The optimal solution right now is to move this Deployment into another namespace, since it needs different access rules than the other Deployments in the original namespace.
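Assuming the Deployment is moved into its own namespace (configurator-ns and special-user are hypothetical names), the Role can then safely grant exec on all pods there:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-exec
  namespace: configurator-ns
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]     # needed to find the pod
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]          # kubectl exec issues a create on pods/exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-exec-binding
  namespace: configurator-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-exec
subjects:
  - kind: User
    name: special-user
    apiGroup: rbac.authorization.k8s.io
```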