Kubernetes custom resource definition required field

I am trying to write a Kubernetes CRD validation schema. I have an array (vc) of structures, and one of the fields in those structures (the name field) is required.
I have looked through various examples, but no error is generated when name is missing. Any suggestions as to what is wrong?
vc:
  type: array
  items:
    type: object
    properties:
      name:
        type: string
      address:
        type: string
    required:
    - name

If you are on v1.8, you will need to enable the CustomResourceValidation feature gate for using the validation feature. This can be done by using the following flag on kube-apiserver:
--feature-gates=CustomResourceValidation=true
Here is an example of it working (I tested this on v1.12, but this should work on earlier versions as well):
The CRD:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            vc:
              type: array
              items:
                type: object
                properties:
                  name:
                    type: string
                  address:
                    type: string
                required:
                - name
The custom resource:
apiVersion: "stable.example.com/v1"
kind: Foo
metadata:
  name: new-foo
spec:
  vc:
  - address: "bar"
Create the CRD.
kubectl create -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/foos.stable.example.com created
Get the CRD and check if the validation field exists in the output. If it doesn't, you probably don't have the feature gate turned on.
kubectl get crd foos.stable.example.com -oyaml
Try to create the custom resource. This should fail with:
kubectl create -f cr-validation.yaml
The Foo "new-foo" is invalid: []: Invalid value: map[string]interface {}{"metadata":map[string]interface {}{"creationTimestamp":"2018-11-18T19:45:23Z", "generation":1, "uid":"7d7f8f0b-eb6a-11e8-b861-54e1ad9de0be", "name":"new-foo", "namespace":"default"}, "spec":map[string]interface {}{"vc":[]interface {}{map[string]interface {}{"address":"bar"}}}, "apiVersion":"stable.example.com/v1", "kind":"Foo"}: validation failure list:
spec.vc.name in body is required
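For comparison, a custom resource whose vc entries include the required name field should be accepted without error (the resource name here is only illustrative):

```yaml
apiVersion: "stable.example.com/v1"
kind: Foo
metadata:
  name: new-foo-valid
spec:
  vc:
  - name: "foo"
    address: "bar"
```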

Tekton YAML TriggerTemplate - string substitution

I have this kind of yaml file to define a trigger
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: app-template-pr-deploy
spec:
  params:
  - name: target-branch
  - name: commit
  - name: actor
  - name: pull-request-number
  - name: namespace
  resourcetemplates:
  - apiVersion: tekton.dev/v1alpha1
    kind: PipelineRun
    metadata:
      generateName: app-pr-$(tt.params.actor)-
      labels:
        actor: $(tt.params.actor)
    spec:
      serviceAccountName: myaccount
      pipelineRef:
        name: app-pr-deploy
      podTemplate:
        nodeSelector:
          location: somelocation
      params:
      - name: branch
        value: $(tt.params.target-branch)
      - name: namespace
        value: $(tt.params.target-branch)
      - name: commit
        value: $(tt.params.commit)
      - name: pull-request-number
        value: $(tt.params.pull-request-number)
      resources:
      - name: app-cluster
        resourceRef:
          name: app-location-cluster
The issue is that sometimes target-branch is something like "integration/feature", and then the namespace is not valid.
I would like to check whether there is an invalid character in the value and replace it if there is.
Is there any way to do it?
I didn't find any workable way to do it besides creating a task that handles this via a shell script later in the pipeline.
This is something you could do from your EventListener, using something such as:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: xx
spec:
  triggers:
  - name: demo
    interceptors:
    - name: addvar
      ref:
        name: cel
      params:
      - name: overlays
        value:
        - key: branch_name
          expression: "body.ref.split('/')[2]"
    bindings:
    - ref: your-triggerbinding
    template:
      ref: your-triggertemplate
Then, from your TriggerTemplate, you would add the "branch_name" param, parsed from your EventListener.
Note: the payload from a git notification may vary; the sample above is valid for GitHub. It translates remote/origin/master into master, or abc/def/ghi/jkl into ghi.
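The split('/')[2] expression keeps the third slash-separated segment of the ref. As a quick sanity check, the same selection can be reproduced in a shell (cut is used here only to illustrate the CEL behavior; it is not part of the trigger):

```shell
# Keep the third slash-separated segment, as body.ref.split('/')[2] does
echo "refs/heads/master" | cut -d/ -f3
echo "abc/def/ghi/jkl" | cut -d/ -f3
```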
I've created a separate task that does all the magic I needed and outputs a valid namespace name into a different variable.
Then, instead of using the namespace variable, I use valid-namespace all the way through the pipeline.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: validate-namespace-task-v1
spec:
  description: >-
    This task will validate namespaces
  params:
  - name: namespace
    type: string
    default: undefined
  results:
  - name: valid-namespace
    description: this should be a valid namespace
  steps:
  - name: triage-validate-namespace
    image: some-image:0.0.1
    script: |
      #!/bin/bash
      echo -n "$(params.namespace)" | sed "s/[^[:alnum:]-]/-/g" | tr '[:upper:]' '[:lower:]' | tee $(results.valid-namespace.path)
Thanks
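The sed/tr pipeline in the script above can be tried directly in a shell; for instance, a branch name like the one from the question (the input value here is assumed for illustration) becomes a valid namespace:

```shell
# Replace anything that is not alphanumeric or '-' with '-', then lowercase
echo -n "Integration/Feature" | sed "s/[^[:alnum:]-]/-/g" | tr '[:upper:]' '[:lower:]'
```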

Restrict Ingress/Egress CIDR Ranges – OPA Gatekeeper NetworkPolicy

I am trying to restrict Ingress/Egress CIDR ranges through an OPA Gatekeeper network policy.
So first I have to create a ConstraintTemplate that will deny any kind of ingress/egress access to any IP or IP CIDR range except the ones allowed in the YAML file below:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyegress
spec:
  crd:
    spec:
      names:
        kind: K8sDenyEgress
      validation:
        openAPIV3Schema:
          properties:
            cidr:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sdenyegress

      violation[{"msg": msg}] {
        input.review.object.kind == "NetworkPolicy"
        cidr_or_ip := {ip | ip := input.review.object.spec.egress[_].to[_].ipBlock.cidr}
        cidr := {ip | ip := input.parameters.cidr[_]}
        value := net.cidr_contains(cidr, cidr_or_ip)
        not(value)
        msg := "The specified IP is not allowed."
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyEgress
metadata:
  name: deny-egress
spec:
  match:
    kinds:
    - apiGroups: ["networking.k8s.io"]
      kinds: ["NetworkPolicy"]
  parameters:
    cidr:
    - "192.168.0.1/24"
After the deployment, I get the error below:
Target: admission.k8s.gatekeeper.sh
Status:
  By Pod:
    Errors:
      Code:    ingest_error
      Message: Could not ingest Rego: 1 error occurred: __modset_templates["admission.k8s.gatekeeper.sh"]["K8sDenyEgress"]_idx_0:7: rego_type_error: net.cidr_contains: invalid argument(s)
               have: (set[any], set[any], ???)
               want: (string, string, boolean)
    Id: gatekeeper-audit-54c9759898-xxdmd
    Observed Generation: 1
    Operations:
      audit
      status
    Template UID: f29e2dd0-5918-48a7-b943-23f36b91690f
    Errors:
      Code:    ingest_error
      Message: Could not ingest Rego: 1 error occurred: __modset_templates["admission.k8s.gatekeeper.sh"]["K8sDenyEgress"]_idx_0:7: rego_type_error: net.cidr_contains: invalid argument(s)
               have: (set[any], set[any], ???)
               want: (string, string, boolean)
    Id: gatekeeper-controller-manager-6bcc7f8fb5-fjbfq
    Observed Generation: 1
    Operations:
      webhook
    Template UID: f29e2dd0-5918-48a7-b943-23f36b91690f
    Errors:
      Code:    ingest_error
      Message: Could not ingest Rego: 1 error occurred: __modset_templates["admission.k8s.gatekeeper.sh"]["K8sDenyEgress"]_idx_0:7: rego_type_error: net.cidr_contains: invalid argument(s)
               have: (set[any], set[any], ???)
               want: (string, string, boolean)
    Id: gatekeeper-controller-manager-6bcc7f8fb5-gwhrl
    Observed Generation: 1
    Operations:
      webhook
    Template UID: f29e2dd0-5918-48a7-b943-23f36b91690f
    Errors:
      Code:    ingest_error
      Message: Could not ingest Rego: 1 error occurred: __modset_templates["admission.k8s.gatekeeper.sh"]["K8sDenyEgress"]_idx_0:7: rego_type_error: net.cidr_contains: invalid argument(s)
               have: (set[any], set[any], ???)
               want: (string, string, boolean)
    Id: gatekeeper-controller-manager-6bcc7f8fb5-sc67f
    Observed Generation: 1
    Operations:
      webhook
    Template UID: f29e2dd0-5918-48a7-b943-23f36b91690f
  Created: true
Events: <none>
Could you please help out with resolving this error?
The function cidr_contains does not accept sets as parameters, see documentation. I used the function cidr_contains_matches instead as follows:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyegress
spec:
  crd:
    spec:
      names:
        kind: K8sDenyEgress
      validation:
        openAPIV3Schema:
          properties:
            cidr:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sdenyegress

      violation[{"msg": msg}] {
        input.review.object.kind == "NetworkPolicy"
        egress_cidrs := {cidr | cidr := input.review.object.spec.egress[_].to[_].ipBlock.cidr}
        whitelist_cidrs := {cidr | cidr := input.parameters.cidr[_]}
        matches := net.cidr_contains_matches(whitelist_cidrs, egress_cidrs)
        matched := {cidr | cidr := matches[_][1]}
        not_matched := egress_cidrs - matched
        count(not_matched) > 0
        msg := sprintf("The network policy '%s' contains egress cidrs that are not contained in whitelist: %s", [input.review.object.metadata.name, not_matched])
      }

      violation[{"msg": msg}] {
        input.review.object.kind == "NetworkPolicy"
        not input.review.object.spec.egress[0].to
        msg := sprintf("The network policy '%s' contains an empty egress (allow all), which is not permitted.", [input.review.object.metadata.name])
      }
For those wondering how to check for errors:
kubectl describe constrainttemplate.templates.gatekeeper.sh/k8sdenyegress

Patching list in kubernetes manifest with Kustomize

I want to patch (overwrite) a list in a Kubernetes manifest with Kustomize.
I am using the patchesStrategicMerge method.
When I patch parameters that are not in a list, patching works as expected: only the parameters addressed in patch.yaml are replaced, and the rest are untouched.
When I patch a list, the whole list is replaced.
How can I replace only specific items in the list while the rest of the items stay untouched?
I found these two resources:
https://github.com/kubernetes-sigs/kustomize/issues/581
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md
but wasn't able to derive the desired solution from them.
Example code:
orig-file.yaml

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  test: test
  other: other-stuff
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: test-user
      channel: "#alerts"
      sendResolved: true
      apiURL:
        name: slack-webhook-url
        key: address
patch.yaml:

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  test: brase-yourself
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: Karl
kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- orig-file.yaml
patchesStrategicMerge:
- patch.yaml
What I get:

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  other: other-stuff
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: Karl
  test: brase-yourself
What I want:

apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: alertmanager-slack-config
  namespace: system-namespace
spec:
  other: other-stuff
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: Karl
      channel: "#alerts"
      sendResolved: true
      apiURL:
        name: slack-webhook-url
        key: address
  test: brase-yourself
What you can do is use a JSON patch (the patches field with a target) instead of patchesStrategicMerge, so in your case:

cat <<EOF >./kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- orig-file.yaml
patches:
- path: patch.yaml
  target:
    group: monitoring.coreos.com
    version: v1alpha1
    kind: AlertmanagerConfig
    name: alertmanager-slack-config
EOF

The patch:

cat <<EOF >./patch.yaml
- op: replace
  path: /spec/receivers/0/slackConfigs/0/username
  value: Karl
EOF
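Since the JSON patch only touches the addressed field, the output rendered by kustomize build keeps the rest of the list item intact, roughly as below. Note that spec.test stays at its original value here, because the strategic-merge patch.yaml has been replaced entirely; changing it would need its own replace op.

```yaml
spec:
  test: test
  other: other-stuff
  receivers:
  - name: default
    slackConfigs:
    - name: slack
      username: Karl
      channel: "#alerts"
      sendResolved: true
      apiURL:
        name: slack-webhook-url
        key: address
```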

CustomResource Spec value returning null

Hi, I have created the following CustomResourceDefinition - crd.yaml:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: test.demo.k8s.com
  namespace: testns
spec:
  group: demo.k8s.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: testpod
    singular: testpod
    kind: testpod
The corresponding resource is as below - cr.yaml:

kind: testpod
metadata:
  name: testpodcr
  namespace: testns
spec:
  containers:
  - name: testname
    image: test/img:v5.16
    env:
    - name: TESTING_ON
      valueFrom:
        configMapKeyRef:
          name: kubernetes-config
          key: type
    volumeMounts:
    - name: testvol
      mountPath: "/test/vol"
      readOnly: true
When I use a client-go program to fetch the spec value of the CR object 'testpodcr', the value comes back as null.
func (c *TestConfigclient) AddNewPodForCR(obj *TestPodConfig) *v1.Pod {
    log.Println("logging obj \n", obj.Name) // Prints the name as testpodcr
    log.Println("Spec value: \n", obj.Spec) // Prints null
    dep := &v1.Pod{
        ObjectMeta: meta_v1.ObjectMeta{
            //Labels: labels,
            GenerateName: "test-pod-",
        },
        Spec: obj.Spec,
    }
    return dep
}
Can anyone please help me figure out why the spec value comes back as null?
There is an error in your crd.yaml file. I am getting the following error:
$ kubectl apply -f crd.yaml
The CustomResourceDefinition "test.demo.k8s.com" is invalid: metadata.name: Invalid value: "test.demo.k8s.com": must be spec.names.plural+"."+spec.group
In your configuration, the name test.demo.k8s.com does not match the plural testpod found in spec.names.
I modified your crd.yaml and now it works:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testpods.demo.k8s.com
  namespace: testns
spec:
  group: demo.k8s.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: testpods
    singular: testpod
    kind: Testpod
$ kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/testpods.demo.k8s.com created
After that, your cr.yaml also had to be fixed:
apiVersion: "demo.k8s.com/v1"
kind: Testpod
metadata:
  name: testpodcr
  namespace: testns
spec:
  containers:
  - name: testname
    image: test/img:v5.16
    env:
    - name: TESTING_ON
      valueFrom:
        configMapKeyRef:
          name: kubernetes-config
          key: type
    volumeMounts:
    - name: testvol
      mountPath: "/test/vol"
      readOnly: true
After that I created the testns namespace and finally created the Testpod object successfully:
$ kubectl create namespace testns
namespace/testns created
$ kubectl apply -f cr.yaml
testpod.demo.k8s.com/testpodcr created
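The rule the apiserver enforces here is simply metadata.name == spec.names.plural + "." + spec.group, which is easy to spell out by hand:

```shell
# The CRD name must be the plural name joined to the group with a dot
plural=testpods
group=demo.k8s.com
echo "${plural}.${group}"
```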

Configure dashboard via values

As the title indicates, I'm trying to set up Grafana using helmfile, with a default dashboard provisioned via values.
The relevant part of my helmfile is here
releases:
...
- name: grafana
  namespace: grafana
  chart: stable/grafana
  values:
  - datasources:
      datasources.yaml:
        apiVersion: 1
        datasources:
        - name: Prometheus
          type: prometheus
          access: proxy
          url: http://prometheus-server.prometheus.svc.cluster.local
          isDefault: true
  - dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  - dashboards:
      default:
        k8s:
          url: https://grafana.com/api/dashboards/8588/revisions/1/download
As far as I can understand from reading here, I need a provider, and then I can refer to a dashboard by URL. However, when I do as shown above, no dashboard is installed, and when I do as below

- dashboards:
    default:
      url: https://grafana.com/api/dashboards/8588/revisions/1/download
I get the following error message
Error: render error in "grafana/templates/deployment.yaml": template: grafana/templates/deployment.yaml:148:20: executing "grafana/templates/deployment.yaml" at <$value>: wrong type for value; expected map[string]interface {}; got string
Any clues about what I'm doing wrong?
I think the problem is that you're defining datasources, dashboardProviders, and dashboards as lists rather than maps, so you need to remove the hyphens. The values section then becomes:
values:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
      - name: Prometheus
        type: prometheus
        url: http://prometheus-prometheus-server
        access: proxy
        isDefault: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards
  dashboards:
    default:
      k8s:
        url: https://grafana.com/api/dashboards/8588/revisions/1/download
The grafana chart expects them as maps, and using helmfile doesn't change that.
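The difference is easiest to see side by side: a leading hyphen makes each entry an element of a list, while the chart expects one map keyed by name. An abbreviated sketch, with empty maps standing in for the real content:

```yaml
# List of single-key maps -- what the original helmfile values produced:
values:
- datasources: {}
- dashboards: {}
---
# Single map -- what the grafana chart expects:
values:
  datasources: {}
  dashboards: {}
```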