I am trying to restrict ingress/egress CIDR ranges through an OPA Gatekeeper policy on NetworkPolicies.
So first I have to create a ConstraintTemplate that denies ingress/egress access to any IP or IP CIDR range except the ones that are allowed, using the YAML file below:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyegress
spec:
  crd:
    spec:
      names:
        kind: K8sDenyEgress
      validation:
        openAPIV3Schema:
          properties:
            cidr:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyegress

        violation [{"msg": msg}] {
          input.review.object.kind == "NetworkPolicy"
          cidr_or_ip := { ip | ip := input.review.object.spec.egress[_].to[_].ipBlock.cidr}
          cidr := { ip | ip := input.parameters.cidr[_]}
          value := net.cidr_contains(cidr, cidr_or_ip)
          not(value)
          msg := "The specified IP is not allowed."
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyEgress
metadata:
  name: deny-egress
spec:
  match:
    kinds:
      - apiGroups: ["networking.k8s.io"]
        kinds: ["NetworkPolicy"]
  parameters:
    cidr:
      - "192.168.0.1/24"
After deploying these, I get the error below:
Target:  admission.k8s.gatekeeper.sh
Status:
  By Pod:
    Errors:
      Code:     ingest_error
      Message:  Could not ingest Rego: 1 error occurred: __modset_templates["admission.k8s.gatekeeper.sh"]["K8sDenyEgress"]_idx_0:7: rego_type_error: net.cidr_contains: invalid argument(s)
                have: (set[any], set[any], ???)
                want: (string, string, boolean)
    Id:                   gatekeeper-audit-54c9759898-xxdmd
    Observed Generation:  1
    Operations:
      audit
      status
    Template UID:  f29e2dd0-5918-48a7-b943-23f36b91690f
    Errors:
      Code:     ingest_error
      Message:  Could not ingest Rego: 1 error occurred: __modset_templates["admission.k8s.gatekeeper.sh"]["K8sDenyEgress"]_idx_0:7: rego_type_error: net.cidr_contains: invalid argument(s)
                have: (set[any], set[any], ???)
                want: (string, string, boolean)
    Id:                   gatekeeper-controller-manager-6bcc7f8fb5-fjbfq
    Observed Generation:  1
    Operations:
      webhook
    Template UID:  f29e2dd0-5918-48a7-b943-23f36b91690f
    Errors:
      Code:     ingest_error
      Message:  Could not ingest Rego: 1 error occurred: __modset_templates["admission.k8s.gatekeeper.sh"]["K8sDenyEgress"]_idx_0:7: rego_type_error: net.cidr_contains: invalid argument(s)
                have: (set[any], set[any], ???)
                want: (string, string, boolean)
    Id:                   gatekeeper-controller-manager-6bcc7f8fb5-gwhrl
    Observed Generation:  1
    Operations:
      webhook
    Template UID:  f29e2dd0-5918-48a7-b943-23f36b91690f
    Errors:
      Code:     ingest_error
      Message:  Could not ingest Rego: 1 error occurred: __modset_templates["admission.k8s.gatekeeper.sh"]["K8sDenyEgress"]_idx_0:7: rego_type_error: net.cidr_contains: invalid argument(s)
                have: (set[any], set[any], ???)
                want: (string, string, boolean)
    Id:                   gatekeeper-controller-manager-6bcc7f8fb5-sc67f
    Observed Generation:  1
    Operations:
      webhook
    Template UID:  f29e2dd0-5918-48a7-b943-23f36b91690f
  Created:  true
Events:  <none>
Could you please help out with resolving this error?
The function net.cidr_contains does not accept sets as parameters, see the documentation. I used the function net.cidr_contains_matches instead, as follows:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyegress
spec:
  crd:
    spec:
      names:
        kind: K8sDenyEgress
      validation:
        openAPIV3Schema:
          properties:
            cidr:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenyegress

        violation[{"msg": msg}] {
          input.review.object.kind == "NetworkPolicy"
          egress_cidrs := { cidr | cidr := input.review.object.spec.egress[_].to[_].ipBlock.cidr }
          whitelist_cidrs := { cidr | cidr := input.parameters.cidr[_] }
          matches := net.cidr_contains_matches(whitelist_cidrs, egress_cidrs)
          matched := { cidr | cidr := matches[_][1] }
          not_matched := egress_cidrs - matched
          count(not_matched) > 0
          msg := sprintf("The network policy '%s' contains egress cidrs that are not contained in whitelist: %s", [input.review.object.metadata.name, not_matched])
        }

        violation[{"msg": msg}] {
          input.review.object.kind == "NetworkPolicy"
          not input.review.object.spec.egress[0].to
          msg := sprintf("The network policy '%s' contains an empty egress (allow all), which is not permitted.", [input.review.object.metadata.name])
        }
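For reference, a NetworkPolicy like the following made-up example should be rejected by the deny-egress constraint above, because 10.0.0.0/8 is not contained in the allowed 192.168.0.1/24 range:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-external    # hypothetical name
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8  # not inside the whitelisted CIDR, so a violation is reported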
For those wondering how to check for errors:
kubectl describe constrainttemplate.templates.gatekeeper.sh/k8sdenyegress
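Once the template ingests cleanly, audit results for the constraint itself (total violations and the offending objects) should appear under the constraint's status; assuming the names from the manifests above:
kubectl describe k8sdenyegress deny-egress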
I have this kind of YAML file to define a trigger:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: app-template-pr-deploy
spec:
  params:
    - name: target-branch
    - name: commit
    - name: actor
    - name: pull-request-number
    - name: namespace
  resourcetemplates:
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        generateName: app-pr-$(tt.params.actor)-
        labels:
          actor: $(tt.params.actor)
      spec:
        serviceAccountName: myaccount
        pipelineRef:
          name: app-pr-deploy
        podTemplate:
          nodeSelector:
            location: somelocation
        params:
          - name: branch
            value: $(tt.params.target-branch)
          - name: namespace
            value: $(tt.params.target-branch)
          - name: commit
            value: $(tt.params.commit)
          - name: pull-request-number
            value: $(tt.params.pull-request-number)
        resources:
          - name: app-cluster
            resourceRef:
              name: app-location-cluster
The issue is that sometimes target-branch is something like "integration/feature", and then the namespace is not valid.
I would like to check whether there is an invalid character in the value and replace it if there is.
Any way to do it?
I didn't find any viable way to do it besides creating a task that executes this via a shell script later in the pipeline.
This is something you could do from your EventListener, using something such as:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: xx
spec:
  triggers:
    - name: demo
      interceptors:
        - name: addvar
          ref:
            name: cel
          params:
            - name: overlays
              value:
                - key: branch_name
                  expression: "body.ref.split('/')[2]"
      bindings:
        - ref: your-triggerbinding
      template:
        ref: your-triggertemplate
Then, from your TriggerTemplate, you would add the "branch_name" param, parsed from your EventListener.
Note: the payload from the git notification may vary; the sample above is valid for GitHub. It translates remote/origin/master into master, or abc/def/ghi/jkl into ghi.
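A sketch of how the overlay value could then be consumed, assuming a Triggers release where interceptor overlays are exposed under extensions (the binding name and param mapping are placeholders):
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: your-triggerbinding
spec:
  params:
    - name: target-branch
      value: $(extensions.branch_name)   # value computed by the cel overlay above
The TriggerTemplate then keeps using $(tt.params.target-branch) as before, but now receives only the extracted branch segment.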
I've created a separate task that does all the magic I needed and outputs a valid namespace name into a different variable.
Then, instead of using the namespace variable, I use valid-namespace all the way through the pipeline.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: validate-namespace-task-v1
spec:
  description: >-
    This task will validate namespaces
  params:
    - name: namespace
      type: string
      default: undefined
  results:
    - name: valid-namespace
      description: this should be a valid namespace
  steps:
    - name: triage-validate-namespace
      image: some-image:0.0.1
      script: |
        #!/bin/bash
        echo -n "$(params.namespace)" | sed "s/[^[:alnum:]-]/-/g" | tr '[:upper:]' '[:lower:]' | tee $(results.valid-namespace.path)
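A usage sketch of how the task result can then replace the raw namespace later in the pipeline (names other than validate-namespace-task-v1 are hypothetical):
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  params:
    - name: namespace
      type: string
  tasks:
    - name: validate-namespace
      taskRef:
        name: validate-namespace-task-v1
      params:
        - name: namespace
          value: $(params.namespace)
    - name: deploy
      runAfter:
        - validate-namespace
      taskRef:
        name: your-deploy-task           # hypothetical
      params:
        - name: namespace
          value: $(tasks.validate-namespace.results.valid-namespace)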
Thanks
I have multiple PrometheusRules (rule a, rule b), and each rule defines a different expression to constrain the alert. I also have different AlertmanagerConfigs (one receiver is Slack, the other one's receiver is Opsgenie). How can we make a connection between the rules and the AlertmanagerConfigs? For example: if rule a is triggered, I want to send a message to Slack; if rule b is triggered, I want to send a message to Opsgenie.
Here is what I tried; however, it does not work. Did I miss something?
This is the PrometheusRule file:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: service-prometheus
    role: alert-rules
    app: kube-prometheus-stack
    release: monitoring-prom
  name: rule_a
  namespace: monitoring
spec:
  groups:
    - name: rule_a_alert
      rules:
        - alert: usage_exceed
          expr: salesforce_api_usage > 100000
          labels:
            severity: urgent
This is the AlertmanagerConfig:
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  labels:
    alertmanagerConfig: slack
  name: slack
  namespace: monitoring
  resourceVersion: "25842935"
  selfLink: /apis/monitoring.coreos.com/v1alpha1/namespaces/monitoring/alertmanagerconfigs/opsgenie-and-slack
  uid: fbb74924-5186-4929-b363-8c056e401921
spec:
  receivers:
    - name: slack-receiver
      slackConfigs:
        - apiURL:
            key: apiURL
            name: slack-config
  route:
    groupBy:
      - job
    groupInterval: 60s
    groupWait: 60s
    receiver: slack-receiver
    repeatInterval: 1m
    routes:
      - matchers:
          - name: job
            value: service_a
        receiver: slack-receiver
You need to match on a label of the alert. In your case you're trying to match on the label job with the value service_a, which doesn't exist. You could either match on a label that does exist in the PrometheusRule file, e.g. severity, by changing the matcher in the AlertmanagerConfig file:
route:
  routes:
    - matchers:
        - name: severity
          value: urgent
      receiver: slack-receiver
or you could add another label in the PrometheusRule file:
spec:
  groups:
    - name: rule_a_alert
      rules:
        - alert: usage_exceed
          expr: salesforce_api_usage > 100000
          labels:
            severity: urgent
            job: service_a
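To complete the rule-b-to-Opsgenie half, a second AlertmanagerConfig can route on a different label value; a sketch (the Secret name/key and the job: service_b label are assumptions mirroring the pattern above):
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: opsgenie
  namespace: monitoring
spec:
  receivers:
    - name: opsgenie-receiver
      opsgenieConfigs:
        - apiKey:
            name: opsgenie-config   # assumed Secret holding the Opsgenie API key
            key: apiKey
  route:
    receiver: opsgenie-receiver
    routes:
      - matchers:
          - name: job
            value: service_b        # label added to rule b, mirroring the example above
        receiver: opsgenie-receiver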
As the title indicates, I'm trying to set up Grafana using helmfile with a default dashboard provisioned via values.
The relevant part of my helmfile is here:
releases:
  ...
  - name: grafana
    namespace: grafana
    chart: stable/grafana
    values:
      - datasources:
          datasources.yaml:
            apiVersion: 1
            datasources:
              - name: Prometheus
                type: prometheus
                access: proxy
                url: http://prometheus-server.prometheus.svc.cluster.local
                isDefault: true
      - dashboardProviders:
          dashboardproviders.yaml:
            apiVersion: 1
            providers:
              - name: 'default'
                orgId: 1
                folder: ''
                type: file
                disableDeletion: false
                editable: true
                options:
                  path: /var/lib/grafana/dashboards
      - dashboards:
          default:
            k8s:
              url: https://grafana.com/api/dashboards/8588/revisions/1/download
As far as I can understand from reading here, I need a provider and then I can refer to a dashboard by URL. However, when I do as shown above, no dashboard is installed, and when I do as below:
      - dashboards:
          default:
            url: https://grafana.com/api/dashboards/8588/revisions/1/download
I get the following error message:
Error: render error in "grafana/templates/deployment.yaml": template: grafana/templates/deployment.yaml:148:20: executing "grafana/templates/deployment.yaml" at <$value>: wrong type for value; expected map[string]interface {}; got string
Any clues about what I'm doing wrong?
I think the problem is that you're defining datasources, dashboardProviders and dashboards as list items rather than map keys, so you need to remove the hyphens, meaning that the values section becomes:
values:
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        - name: Prometheus
          type: prometheus
          url: http://prometheus-prometheus-server
          access: proxy
          isDefault: true
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  dashboards:
    default:
      k8s:
        url: https://grafana.com/api/dashboards/8588/revisions/1/download
The Grafana chart expects these values as maps, and using helmfile doesn't change that.
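Side note: if I remember the Grafana chart's values correctly, a dashboard from grafana.com can also be pulled in by its id instead of a full URL; a sketch (the dashboard name k8s and the datasource name are just the ones used above):
dashboards:
  default:
    k8s:
      gnetId: 8588            # grafana.com dashboard id
      revision: 1
      datasource: Prometheus  # must match the datasource name defined above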
I am trying to write a Kubernetes CRD validation schema. I have an array (vc) of structures, and one of the fields in those structures (the name field) is required.
I looked through various examples, but no error is generated when name is missing. Any suggestions as to what is wrong?
vc:
  type: array
  items:
    type: object
    properties:
      name:
        type: string
      address:
        type: string
    required:
      - name
If you are on v1.8, you will need to enable the CustomResourceValidation feature gate to use the validation feature. This can be done with the following flag on kube-apiserver:
--feature-gates=CustomResourceValidation=true
Here is an example of it working (I tested this on v1.12, but it should work on earlier versions as well):
The CRD:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            vc:
              type: array
              items:
                type: object
                properties:
                  name:
                    type: string
                  address:
                    type: string
                required:
                  - name
The custom resource:
apiVersion: "stable.example.com/v1"
kind: Foo
metadata:
name: new-foo
spec:
vc:
- address: "bar"
Create the CRD.
kubectl create -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/foos.stable.example.com created
Get the CRD and check if the validation field exists in the output. If it doesn't, you probably don't have the feature gate turned on.
kubectl get crd foos.stable.example.com -oyaml
Try to create the custom resource. This should fail with:
kubectl create -f cr-validation.yaml
The Foo "new-foo" is invalid: []: Invalid value: map[string]interface {}{"metadata":map[string]interface {}{"creationTimestamp":"2018-11-18T19:45:23Z", "generation":1, "uid":"7d7f8f0b-eb6a-11e8-b861-54e1ad9de0be", "name":"new-foo", "namespace":"default"}, "spec":map[string]interface {}{"vc":[]interface {}{map[string]interface {}{"address":"bar"}}}, "apiVersion":"stable.example.com/v1", "kind":"Foo"}: validation failure list:
spec.vc.name in body is required
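For contrast, a resource that includes the required name field is admitted; for example:
apiVersion: "stable.example.com/v1"
kind: Foo
metadata:
  name: valid-foo
spec:
  vc:
    - name: "foo"
      address: "bar"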
I am trying to install JHipster Registry on Kubernetes as a StatefulSet with the given jhipster-registry.yml (from git).
I see the service and the StatefulSet coming up, but I don't see the worker pods. :-(
Could you share how to get the worker pods up?
Update:
I am totally new to JHipster and Kubernetes. Thanks for your response. I pasted the jhipster-registry.yml below.
The command I have tried is kubectl create -f jhipster-registry.yml.
Error from kubectl describe statefulset:
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      application-config
    Optional:  false
Volume Claims:  <none>
Events:
  FirstSeen  LastSeen  Count  From         SubObjectPath  Type     Reason        Message
  ---------  --------  -----  ----         -------------  ----     ------        -------
  1d         5s        4599   statefulset                 Warning  FailedCreate  create Pod jhipster-registry-0 in StatefulSet jhipster-registry failed error: Pod "jhipster-registry-0" is invalid: spec.containers[0].env[8].name: Invalid value: "JHIPSTER_LOGGING_LOGSTASH_ENABLED=true": a valid C identifier must start with alphabetic character or '_', followed by a string of alphanumeric characters or '_' (e.g. 'my_name', or 'MY_NAME', or 'MyName', regex used for validation is '[A-Za-z_][A-Za-z0-9_]*')
YML:
apiVersion: v1
kind: Service
metadata:
  name: jhipster-registry
  labels:
    app: jhipster-registry
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  ports:
    - port: 9761
  clusterIP: None
  selector:
    app: jhipster-registry
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jhipster-registry
spec:
  serviceName: jhipster-registry
  replicas: 2
  template:
    metadata:
      labels:
        app: jhipster-registry
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: jhipster-registry
          image: jhipster/jhipster-registry:v3.2.3
          ports:
            - containerPort: 9761
          env:
            - name: CLUSTER_SIZE
              value: "2"
            - name: STATEFULSET_NAME
              value: "jhipster-registry"
            - name: STATEFULSET_NAMESPACE
              value: "stage"
            - name: SPRING_PROFILES_ACTIVE
              value: prod,swagger,native
            - name: SECURITY_USER_PASSWORD
              value: admin
            - name: JHIPSTER_SECURITY_AUTHENTICATION_JWT_SECRET
              value: secret-is-nothing-its-just-inside-you
            - name: EUREKA_CLIENT_FETCH_REGISTRY
              value: 'true'
            - name: EUREKA_CLIENT_REGISTER_WITH_EUREKA
              value: 'true'
            - name: JHIPSTER_LOGGING_LOGSTASH_ENABLED=true
              value: 'true'
            - name: GIT_URI
              value: https://github.com/jhipster/jhipster-registry/
            - name: GIT_SEARCH_PATHS
              value: central-config
          command:
            - "/bin/sh"
            - "-ec"
            - |
              HOSTNAME=$(hostname)
              export EUREKA_INSTANCE_HOSTNAME="${HOSTNAME}.jhipster-registry.${STATEFULSET_NAMESPACE}.svc.cluster.local"
              echo "Setting EUREKA_INSTANCE_HOSTNAME=${EUREKA_INSTANCE_HOSTNAME}"
              echo "Configuring Registry Replicas for CLUSTER_SIZE=${CLUSTER_SIZE}"
              LAST_POD_INDEX=$((${CLUSTER_SIZE} - 1))
              REPLICAS=""
              for i in $(seq 0 $LAST_POD_INDEX); do
                REPLICAS="${REPLICAS}http://admin:${SECURITY_USER_PASSWORD}#${STATEFULSET_NAME}-${i}.jhipster-registry.${STATEFULSET_NAMESPACE}.svc.cluster.local:8761/eureka/"
                if [ $i -lt $LAST_POD_INDEX ]; then
                  REPLICAS="${REPLICAS},"
                fi
              done
              export EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=$REPLICAS
              echo "EUREKA_CLIENT_SERVICE_URL_DEFAULTZONE=${REPLICAS}"
              java -jar /jhipster-registry.war --spring.cloud.config.server.git.uri=${GIT_URI} --spring.cloud.config.server.git.search-paths=${GIT_SEARCH_PATHS} -Djava.security.egd=file:/dev/./urandom
          volumeMounts:
            - name: config-volume
              mountPath: /central-config
      volumes:
        - name: config-volume
          configMap:
            name: application-config
Your StatefulSet is invalid because of an invalid env name:
  - name: JHIPSTER_LOGGING_LOGSTASH_ENABLED=true
    value: 'true'
You have used the env name JHIPSTER_LOGGING_LOGSTASH_ENABLED=true, which is invalid.
The correct format is [A-Za-z_][A-Za-z0-9_]*.
That's why the pods are not coming up.
Change the StatefulSet to use:
  - name: JHIPSTER_LOGGING_LOGSTASH_ENABLED
    value: 'true'
This will work.
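After fixing the env name and re-applying the manifest, the pods should be created; a quick way to verify (standard kubectl, nothing specific to this setup):
kubectl apply -f jhipster-registry.yml
kubectl get pods -l app=jhipster-registry
kubectl describe statefulset jhipster-registry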