We are looking to use OPA gatekeeper to audit K8s PodDisruptionBudget (PDB) objects. In particular, we are looking to audit the number of disruptionsAllowed within the status field.
I believe this field will not be available at point of admission since it is calculated and added by the apiserver once the PDB has been applied to the cluster.
It appears that for Pods, for example, the status field is passed as part of the AdmissionReview object [1]. In that particular example, it appears that only the pre-admission status fields make it into the AdmissionReview object.
1.) Is it possible to audit on the current in-cluster status fields in the case of PDBs?
2.) Given the intended use of OPA Gatekeeper as an admission controller, would this be considered an anti-pattern?
[1] https://www.openpolicyagent.org/docs/latest/kubernetes-introduction/
This is actually quite reasonable, and is one of the use cases of Audit. You just need to make sure audit is enabled and spec.enforcementAction: dryrun is set in the Constraint.
Here is an example of what the ConstraintTemplate's Rego would look like (see the OPA Playground).
deny[msg] {
    value := input.request.object.status.disruptionsAllowed
    # In Gatekeeper this limit comes from the Constraint's parameters;
    # bound to a constant here so the rule runs standalone in the Playground.
    maxDisruptionsAllowed := 10
    value > maxDisruptionsAllowed
    msg := sprintf("status.disruptionsAllowed must be <%v> or fewer; found <%v>", [maxDisruptionsAllowed, value])
}
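For reference, when this check is packaged as a Gatekeeper ConstraintTemplate, the rule is usually written as a violation rule that reads the object from input.review and the limit from input.parameters. This is a minimal sketch, assuming the kind name used by the Constraint below and that the parameter is passed as a single-item list:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedpoddisruptions
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedPodDisruptions
      validation:
        openAPIV3Schema:
          properties:
            maxDisruptionsAllowed:
              type: array
              items:
                type: integer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedpoddisruptions

        violation[{"msg": msg}] {
          value := input.review.object.status.disruptionsAllowed
          maxAllowed := input.parameters.maxDisruptionsAllowed[0]
          value > maxAllowed
          msg := sprintf("status.disruptionsAllowed must be <%v> or fewer; found <%v>", [maxAllowed, value])
        }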
In the specific Constraint, make sure to set enforcementAction to dryrun so the Constraint does not prevent k8s from updating the status field. For example:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedPodDisruptions
metadata:
  name: max-disruptions
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: ["policy"]
        kinds: ["PodDisruptionBudget"]
    namespaces:
      - "default"
  parameters:
    maxDisruptionsAllowed:
      - 10
If you forget to set enforcementAction, k8s will be unable to update the status field of the PodDisruptionBudget.
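Audit results are then written back to the Constraint's status field, which you can inspect with kubectl get k8sallowedpoddisruptions max-disruptions -o yaml. It looks roughly like the sketch below (the values are illustrative, not taken from a real cluster):

status:
  auditTimestamp: "2021-01-01T00:00:00Z"
  totalViolations: 1
  violations:
  - enforcementAction: dryrun
    kind: PodDisruptionBudget
    name: my-pdb
    namespace: default
    message: "status.disruptionsAllowed must be <10> or fewer; found <12>"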
I have a ValidatingWebhook that triggers when certain CRD resources receive CREATE or UPDATE operations.
I want a specific ConfigMap to trigger that validating webhook as well.
Under the same namespace I have multiple CRDs and ConfigMaps, but I want the webhook to also trigger for only one of the ConfigMaps.
These are the ValidatingWebhook properties (admissionregistration.k8s.io/v1beta1).
I guess the namespaceSelector is not the perfect match for my needs since it triggers for any ConfigMap under that namespace. I also tried to work out whether objectSelector is a good solution, but couldn't fully understand it.
This is the relevant part of my webhook configuration:
webhooks:
  - name: myWebhook.webhook
    clientConfig:
      ***
    failurePolicy:
      ***
    rules:
      - operations: ['CREATE', 'UPDATE']
        apiGroups: ***
        apiVersions: ***
        resources: [CRD_resource_1, CRD_resource_2]
So I guess my question is: how can I pick one of the multiple ConfigMaps to trigger my validation webhook?
Many thanks.
You definitely should use objectSelector in order to act only on specific configMaps.
You can make sure you put some specific label on those configMaps and configure your webhook:
objectSelector:
  matchLabels:
    myCoolConfigMaps: "true"
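A minimal sketch of how that could fit into the configuration, assuming the label key/value above and a rule that also sends ConfigMap requests to the webhook (both are placeholders, not taken from your setup). Note that objectSelector filters every rule of the webhook entry it is attached to, so if the CRD resources should keep triggering regardless of labels, it is cleaner to add a separate entry just for ConfigMaps:

webhooks:
  - name: myWebhook.webhook
    # existing entry for the CRD resources, unchanged
    ***
  - name: myWebhookConfigmaps.webhook
    # same clientConfig / failurePolicy as the entry above
    ***
    rules:
      - operations: ['CREATE', 'UPDATE']
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["configmaps"]
    # only ConfigMaps carrying this label reach the webhook
    objectSelector:
      matchLabels:
        myCoolConfigMaps: "true"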
I've been testing Istio (1.6) authorization policies and would like to confirm the following:
Can I use k8s service names as shown below where httpbin.bar is the service name for deployment/workload httpbin:
- to:
  - operation:
      hosts: ["httpbin.bar"]
I have the following rule: only ALLOW access to the httpbin.bar service from the service account sleep in the foo namespace.
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist-httpbin-bar
  namespace: bar
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/sleep"]
  - to:
    - operation:
        hosts: ["httpbin.bar"]
I set up 2 services: httpbin.bar and privatehttpbin.bar. My assumption was that the policy would block access to privatehttpbin.bar, but this is not the case. On a side note, I deliberately avoided adding selector.matchLabels because, as far as I can tell, the rule should only succeed for httpbin.bar.
The docs state:
A match occurs when at least one source, operation and condition matches the request.
as per here.
My interpretation was that AND logic would apply to the source and the operation.
Would appreciate if I can find out why this may not be working or if my understanding needs to be corrected.
With your AuthorizationPolicy object, you have two rules in the namespace bar:
Allow any request coming from the foo namespace with the service account sleep, to any service.
Allow any request to the httpbin service, from any namespace and any service account.
So it is an OR that you are applying.
If you want an AND to be applied, meaning allow any request from the namespace foo with service account sleep to talk to the service httpbin in the namespace bar, you need to apply the following rule:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: whitelist-httpbin-bar
  namespace: bar
spec:
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/foo/sa/sleep"]
    to: # <- remove the dash (-) from here
    - operation:
        hosts: ["httpbin.bar"]
On the first point: you can specify the host by the k8s service name, so httpbin.bar is acceptable for the hosts field.
On the second point,
As per here,
Authorization Policy scope (target) is determined by “metadata/namespace” and an optional “selector”.
“metadata/namespace” tells which namespace the policy applies. If set to root namespace, the policy applies to all namespaces in a mesh.
So the authorization policy whitelist-httpbin-bar applies to workloads in the namespace foo. But the services httpbin and privatehttpbin that you want to authorize lie in the bar namespace, so your authorization policy does not restrict access to these services.
If there are no ALLOW policies for the workload, allow the request.
The above criterion makes the request a valid one.
Hope this helps.
I am trying to create a service for a set of pods based on certain selectors. For example, the below get pods command retrieves the right pods for my requirement -
kubectl get pods --selector property1=dev,property2!=admin
Below is an extract of the service definition YAML where I am attempting to use the same selectors as above:
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
    - name: port1
      port: 30303
      targetPort: 30303
  selector:
    property1: dev
    << property2: ???? >>>
I have tried matchExpressions without realizing that service is not among the resources that support set-based filters. It resulted in the following error -
error: error validating "STDIN": error validating data: ValidationError(Service.spec.selector.matchExpressions): invalid type for io.k8s.api.core.v1.ServiceSpec.selector: got "array", expected "string"; if you choose to ignore these errors, turn validation off with --validate=false
I am running upstream Kubernetes 1.12.5
I've done some tests but I am afraid it is not possible. As per the docs, the API supports two types of selectors:
Equality-based
Set-based
kubectl allows you to use operators like =, == and !=. So it works when you are using $ kubectl get pods --selector property1=dev,property2!=admin.
The configuration you want to apply would work with the set-based option, as it supports in, notin and exists:
environment in (production, qa)
tier notin (frontend, backend)
partition
!partition
Unfortunately, set-based selectors are supported only by newer resources such as Job, Deployment, ReplicaSet and DaemonSet, and are not supported by Services.
More information about this can be found here.
Even if you set the selector in the YAML as:
property2: !value
the Service's property2 will end up without any value:
Selector: property1=dev,property2=
As additional information, the comma (,) is recognized as AND in Services.
As I am not aware of how you are managing your cluster, the only thing I can advise is to redefine your labels so that only equality-based AND matching is needed.
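A rough sketch of that workaround: add a dedicated label (the key service1-member below is made up) to the pod templates of the workloads that currently match property1=dev,property2!=admin, and let the Service select on that single equality-based label. The label needs to be in the pod template, not just applied to live pods with kubectl label, otherwise it is lost when the pods are recreated.

apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  ports:
    - name: port1
      port: 30303
      targetPort: 30303
  selector:
    # single equality-based label replacing the property2 != admin condition
    service1-member: "true"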
The YAML snippet below is used to create a Role:
I am new to Kubernetes and am looking for a reference that elaborates on the rules mentioned in the YAML. For example, my understanding is that "" indicates the core API group of Kubernetes; my question then is what "extensions" is for. Similarly, for the rest of the YAML I am looking for a reference/explanation. Thanks a lot for the help.
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
  - apiGroups: ["batch"]
    resources:
      - jobs
      - cronjobs
    verbs: ["*"]
It is just a way of grouping k8s objects. As objects get added to k8s, they are added to a specific group.
The core API group is v1, so every time you see apiVersion: v1, such as on a Pod object, it is the core API group. Service, ConfigMap, Node, Secret, etc. are v1 too. These are the main objects k8s works with; they have been there almost from the very beginning and are solid (in my own words).
Objects can be moved from one group to another depending on their maturity. Deployments, for example, are now in the apps/v1 group, but they used to be in extensions/v1beta1. I personally have YAML files with the old group that, when I try to create them, produce an error from the server. I think there was a period when both apps/v1 and extensions/v1beta1 were valid. Not sure about this, though.
K8s is an extensible platform, so you can also create your own objects through a CustomResourceDefinition and put them in custom groups. This is the case for Ingress controllers, meshes, etc. Istio, for example, creates a bunch of objects such as Gateway, VirtualService, DestinationRule, etc. Once you create these CRDs, you can get them with a normal kubectl get gateway, for example.
batch/v1 is for Jobs. I think there are no more objects in batch/v1. CronJobs are batch/v1beta1.
HorizontalPodAutoscaler is in autoscaling/v1.
Now, you don't really need to know these objects and their groups by heart. As the other answer says, you can always do kubectl explain OBJECT to find out which group an object belongs to. So a normal workflow for creating RBAC rules would be:
What object do I want to control access to? -> Say Jobs.
kubectl explain jobs -> from here I will get that Jobs are batch/v1.
I will create an RBAC rule for batch.
The verbs are self explanatory.
Note that the group is only the first part; batch, extensions, "" (as v1 has nothing), etc.
There is more information about RBAC (like nonResourceURLs, for the paths that exposes api-server), but I think this should be enough to make a picture of how apiGroups work. Hope it helps.
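To make the mapping concrete, here is a minimal sketch of a Role (the name and namespace are placeholders) that touches the three groups discussed above:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: example-reader
  namespace: default
rules:
  - apiGroups: [""]      # core group (v1): pods, services, configmaps, ...
    resources: ["pods"]
    verbs: ["get", "list"]
  - apiGroups: ["apps"]  # deployments, replicasets, statefulsets, daemonsets
    resources: ["deployments"]
    verbs: ["get", "list"]
  - apiGroups: ["batch"] # jobs and cronjobs
    resources: ["jobs"]
    verbs: ["get", "list"]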
kubectl explain clusterrole.rules will provide a detailed explanation.
FIELDS:
apiGroups <[]string> -required-
APIGroups is the name of the APIGroup that contains the resources. If this
field is empty, then both kubernetes and origin API groups are assumed.
That means that if an action is requested against one of the enumerated
resources in either the kubernetes or the origin API group, the request
will be allowed
extensions is a deprecated apiGroup where previously unorganized resources used to live; resources are currently moving to specific groups. For instance, DaemonSet, Deployment, StatefulSet, and ReplicaSet migrated to the apps group (see api-deprecations-in-1-16).
Here is the naming convention:
The named groups are at REST path /apis/$GROUP_NAME/$VERSION, and use apiVersion: $GROUP_NAME/$VERSION (e.g. apiVersion: batch/v1).
The core group, often referred to as the legacy group, is at the REST path /api/v1 and uses apiVersion: v1.
Full list of supported API groups can be seen in Kubernetes API reference.
batch is another group in k8s, which contains the CronJob and Job resources.
Verbs are actions such as list, get, etc. (see Verb-on-resources).
you can list all of the resources and their group with the following command
kubectl api-resources
Especially considering all the asynchronous procedures involved with creating and updating a deployment, I find it difficult to reliably find the current pods associated with the current version of a given deployment.
Currently, I do:
1. Add unique labels to the deployment's template.
2. Get the revision number of the deployment.
3. Get all replica sets with the labels.
4. Filter them further to find the one with the correct revision number.
5. Extract the pod template hash from the replica set.
6. Get all pods with the labels plus the pod template hash.
This is awkward and complex. Besides, I am not sure that (4) and (6) are guaranteed to yield only the wanted objects. But I cannot filter by ownerReferences, can I?
Is there a more robust and simpler way?
When you create Deployment, it creates ReplicaSet, which creates Pods.
The ReplicaSet contains an "ownerReferences" field which includes the name and the UID of the parent Deployment.
Pods contain the same field, with a link to the parent ReplicaSet.
Here is an example of ReplicaSet info:
# kubectl get rs nginx-deployment-569477d6d8 -o yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
...
  name: nginx-deployment-569477d6d8
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx-deployment
    uid: acf5fe8a-5d0e-11e8-b14f-42010a8000fc
...
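The Pods created by that ReplicaSet point back to it in the same way, and they also carry a pod-template-hash label whose value matches the suffix of the ReplicaSet name, so you can filter on it directly. A rough sketch following the example above (the app label is illustrative):

# kubectl get pods -l pod-template-hash=569477d6d8 -o yaml
apiVersion: v1
kind: Pod
...
  labels:
    app: nginx
    pod-template-hash: "569477d6d8"
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: nginx-deployment-569477d6d8
...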