In a Kubernetes operator, what's the correct way to trigger reconciliation on another resource type?

I have written an Ansible operator using the operator-sdk that assists with our cluster onboarding: from a single ProjectDefinition resource, the operator provisions a namespace, groups, rolebindings, resourcequotas, and limitranges.
The resourcequotas and limitranges are defined in a ProjectQuota resource, and are referenced by name in the ProjectDefinition.
When the content of a ProjectQuota resource is modified, I want to ensure that those changes propagate to any managed namespaces that use the named quota. Right now, I'm doing it this way:
Calculate a content hash for the ProjectQuota.
Look up namespaces that reference the quota using a label selector.
Annotate the discovered namespaces with the content hash.
The namespace update triggers a reconciliation of the associated ProjectDefinition because of the ownerReference on the namespace, and this in turn propagates the changes in the ProjectQuota.
In other words, the projectquota role does this:
---
# tasks file for ProjectQuota

- name: Calculate content_hash
  set_fact:
    content_hash: "{{ quotadef|to_json|hash('sha256') }}"
  vars:
    quotadef:
      q: "{{ resource_quota|default({}) }}"
      l: "{{ limit_range|default({}) }}"

- name: Look up affected namespaces
  kubernetes.core.k8s_info:
    api_version: v1
    kind: namespace
    label_selectors:
      - "example.com/named-quota = {{ ansible_operator_meta.name }}"
  register: namespaces

- name: Update namespaces
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: "{{ namespace.metadata.name }}"
        annotations:
          example.com/named-quota-hash: "{{ content_hash }}"
  loop: "{{ namespaces.resources }}"
  loop_control:
    loop_var: namespace
    label: "{{ namespace.metadata.name }}"
And the projectdefinition role does (among other things) this:
- name: "{{ ansible_operator_meta.name }} : handle named quota"
when: >-
quota.quota_name|default(false)
block:
- name: "{{ ansible_operator_meta.name }} : look up named quota"
kubernetes.core.k8s_info:
api_version: "{{ apiVersion }}"
kind: ProjectQuota
name: "{{ quota.quota_name }}"
register: named_quota
- name: "{{ ansible_operator_meta.name }} : label namespace"
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Namespace
metadata:
name: "{{ ansible_operator_meta.name }}"
namespace: "{{ ansible_operator_meta.name }}"
labels:
example.com/named-quota: "{{ quota.quota_name }}"
- name: "{{ ansible_operator_meta.name }} : apply resourcequota"
when: >-
"resourceQuota" in named_quota.resources[0].spec
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: ResourceQuota
metadata:
name: "default"
namespace: "{{ ansible_operator_meta.name }}"
labels:
example.com/project: "{{ ansible_operator_meta.name }}"
spec: "{{ named_quota.resources[0].spec.resourceQuota }}"
- name: "{{ ansible_operator_meta.name }} : apply limitrange"
when: >-
"limitRange" in named_quota.resources[0].spec
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: LimitRange
metadata:
name: "default"
namespace: "{{ ansible_operator_meta.name }}"
labels:
example.com/project: "{{ ansible_operator_meta.name }}"
spec: "{{ named_quota.resources[0].spec.limitRange }}"
This all works, but this seems like the sort of thing for which there
is probably a canonical solution. Is this it? I initially tried using
the generation on the ProjectQuota instead of a content hash, but
this value isn't exposed to the role by the Ansible operator.
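(For what it's worth, the generation can be read back inside the role with an explicit lookup; a sketch, assuming the CRD is served as example.com/v1:)
- name: Read back this ProjectQuota to get its generation
  kubernetes.core.k8s_info:
    api_version: example.com/v1   # assumed group/version for the CRD
    kind: ProjectQuota
    name: "{{ ansible_operator_meta.name }}"
  register: this_quota

- name: Expose metadata.generation as a fact
  set_fact:
    quota_generation: "{{ this_quota.resources[0].metadata.generation }}"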

Related

Helm Go Templating using a Dictionary. I need help getting the values

I cannot find a way to iterate over a range in Helm templating. I have the following definition in my values.yaml.
Variable dictionary to be consumed:
projects:
  - tenants: imc
    namespaces:
      - name: test-1
        company: inter
        environments:
          - build
          - dev
          - stage
          - test
      - name: test-2
        environments:
          - build
          - dev
          - stage
          - test
      - name: test-3
        environments:
          - build
          - dev
          - stage
          - test
Code snippet
{{- range $key, $value := .Values.tenants }}
{{- range $nkey, $nvalue := .namespaces }}
---
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    parent_project_name: {{ $value.name }}
    company: {{ $value.company }}
    openshift.io/description: ""
    openshift.io/display-name: ""
  labels:
    tenant: {{ $value.tenants }}
  name: {{ $value.name }}-{{ $nvalue }}
spec: {}
status: {}
{{- end }}
{{- end }}
I need help consuming this variable in the template.
Could you share the expected output? This is my guess:
values.yaml
projects:
  - tenants: imc
    namespaces:
      - name: test-1
        company: inter
        environments:
          - build
          - dev
          - stage
          - test
      - name: test-2
        environments:
          - build
          - dev
          - stage
          - test
      - name: test-3
        environments:
          - build
          - dev
          - stage
          - test
templates/ns.yaml
{{- range $key, $value := .Values.projects }}
{{- range $nkey, $nvalue := .namespaces }}
---
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    parent_project_name: {{ $nvalue.name }}
    company: {{ $nvalue.company }}
    openshift.io/description: ""
    openshift.io/display-name: ""
  labels:
    tenant: {{ $value.tenants }}
  name: {{ $nvalue.name }}
spec: {}
status: {}
{{- end }}
{{- end }}
cmd
helm template --debug test .
output
---
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    parent_project_name: test-1
    company: inter
    openshift.io/description: ""
    openshift.io/display-name: ""
  labels:
    tenant: imc
  name: test-1
spec: {}
status: {}
---
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    parent_project_name: test-2
    company:
    openshift.io/description: ""
    openshift.io/display-name: ""
  labels:
    tenant: imc
  name: test-2
spec: {}
status: {}
---
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    parent_project_name: test-3
    company:
    openshift.io/description: ""
    openshift.io/display-name: ""
  labels:
    tenant: imc
  name: test-3
spec: {}
status: {}

Unable to deploy Kubernetes secrets using Helm

I'm trying to create my first Helm release on an AKS cluster using a GitLab pipeline,
but when I run the following command
- helm upgrade server ./aks/server
    --install
    --namespace demo
    --kubeconfig ${CI_PROJECT_DIR}/.kube/config
    --set image.name=${CI_PROJECT_NAME}/${CI_PROJECT_NAME}-server
    --set image.tag=${CI_COMMIT_SHA}
    --set database.user=${POSTGRES_USER}
    --set database.password=${POSTGRES_PASSWORD}
I receive the following error:
"Error: Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data:
decode base64: illegal base64 data at input byte 8, error found in #10 byte of ..."
It looks like something is not working with the secrets file, but I don't understand what.
The secret.yaml template file is defined as follows:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}
I will also add the deployment and the service .yaml files.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.app.name }}
  labels:
    app: {{ .Values.app.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      tier: backend
      stack: node
      app: {{ .Values.app.name }}
  template:
    metadata:
      labels:
        tier: backend
        stack: node
        app: {{ .Values.app.name }}
    spec:
      containers:
        - name: {{ .Values.app.name }}
          image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          imagePullPolicy: IfNotPresent
          env:
            - name: User
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: User
                  optional: false
            - name: Host
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Host
                  optional: false
            - name: Database
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Database
                  optional: false
            - name: Password
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Password
                  optional: false
            - name: Ports
              valueFrom:
                secretKeyRef:
                  name: server-secret
                  key: Ports
                  optional: false
          resources:
            limits:
              cpu: "1"
              memory: "128M"
          ports:
            - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: server-service
spec:
  type: ClusterIP
  selector:
    tier: backend
    stack: node
    app: {{ .Values.app.name }}
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
Any hint?
You have to encode the secret values to base64.
Check the docs on encoding-functions.
Try the code below:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
data:
  User: {{ .Values.database.user | b64enc }}
  Host: {{ .Values.database.host | b64enc }}
  Database: {{ .Values.database.name | b64enc }}
  Password: {{ .Values.database.password | b64enc }}
  Port: {{ .Values.database.port | b64enc }}
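Note that b64enc expects a string; if a value such as database.port is numeric in values.yaml, it typically needs converting first, for example:
Port: {{ .Values.database.port | toString | b64enc }}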
Alternatively, use stringData instead of data.
stringData allows you to create the secret without encoding to base64.
Check the example in the link:
apiVersion: v1
kind: Secret
metadata:
  name: server-secret
  namespace: demo
type: Opaque
stringData:
  User: {{ .Values.database.user }}
  Host: {{ .Values.database.host }}
  Database: {{ .Values.database.name }}
  Password: {{ .Values.database.password }}
  Port: {{ .Values.database.port }}

Ansible Kubernetes Job arg inject random string

I am thinking about creating a Kubernetes Job in Ansible with a random string (password) generated on the fly and injected into the args/command line. However, I am not sure whether what I am trying to achieve will work, as the Jinja template below already imports data from the values YAML file.
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ deployment.namespace }}  # <- taken from the values YAML
  name: create-secret
  labels:
    app: test
    app.kubernetes.io/name: create-secret
    app.kubernetes.io/component: test
    app.kubernetes.io/part-of: test
    app.kubernetes.io/managed-by: test
  annotations:
spec:
  backoffLimit: 0
  template:
    metadata:
      namespace: {{ deployment.namespace }}  # <- taken from the values YAML
      name: create-secret
      labels:
        app: test
        app.kubernetes.io/name: create-secret
        app.kubernetes.io/component: test
        app.kubernetes.io/part-of: test
        app.kubernetes.io/managed-by: test
    spec:
      restartPolicy: Never
      containers:
        - name: create-secret
          command: ["/bin/bash"]
          args: ["-c", "somecommand create --secret {{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1, length=15) }} --name 'test'"]
          image: {{ registry.host }}/{{ images.docker.image.name }}:{{ images.docker.image.tag }}  # <- taken from the values YAML
It'll work fine, but (as you pointed out) due to golang/helm using the same template characters as jinja2 {{, you'll need to take one of two approaches: either wrap every golang set of mustaches in {{ "{{" }} in order for jinja2 to emit the text {{ in the resulting file, or change the jinja2 template delimiters to something other than {{
example 1
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ "{{" }} deployment.namespace {{ "}}" }}
  name: create-secret
  ...
      containers:
        - name: create-secret
          command: ["/bin/bash"]
          args: ["-c", "somecommand create --secret {{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1, length=15) }} --name 'test'"]
          image: {{ "{{" }} registry.host {{ "}}" }}/{{ "{{" }} images.docker.image.name {{ "}}:{{" }} images.docker.image.tag {{ "}}" }}  # <- taken from the values YAML
Although you'll also likely want to use | quote for that random_string, since in my local test it produced a password of 2-b19e2k#HUF=k` and that ` will be interpreted by the sh -c, leading to an error.
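For example, the same args line with the filter applied:
args: ["-c", "somecommand create --secret {{ lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1, length=15) | quote }} --name 'test'"]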
example 2
# my-job.yml.j2
apiVersion: batch/v1
kind: Job
metadata:
  namespace: {{ deployment.namespace }}  # <- taken from the values YAML
  name: create-secret
  ...
      containers:
        - name: create-secret
          command: ["/bin/bash"]
          args: ["-c", "somecommand create --secret [% lookup('community.general.random_string', min_lower=1, min_upper=1, min_special=1, min_numeric=1, length=15) %] --name 'test'"]
          image: {{ registry.host }}/{{ images.docker.image.name }}:{{ images.docker.image.tag }}  # <- taken from the values YAML

- template:
    src: my-job.yml.j2
    dest: my-job.yml
    variable_start_string: '[%'
    variable_end_string: '%]'

Kubernetes - How to define ConfigMap built using a file in a yaml?

At present I am creating a configmap from the file config.json by executing:
kubectl create configmap jksconfig --from-file=config.json
I want the ConfigMap to be created as part of the deployment, so I tried this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
But it doesn't seem to work. What should go into configmap.yaml so that the same ConfigMap is created?
---UPDATE---
When I do a helm install dry run:
# Source: mychartv2/templates/jks-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |
Note: I am using minikube as my Kubernetes cluster.
Your config.json file should be inside your mychart/ directory, not inside mychart/templates
Chart Template Guide
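A typical chart layout would then look something like this (sketch):
mychart/
├── Chart.yaml
├── values.yaml
├── config.json        <- read by .Files.Get
└── templates/
    └── configmap.yaml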
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4}}
config.json
{
  "val": "key"
}
helm install --dry-run --debug mychart
[debug] Created tunnel using local port: '52091'
[debug] SERVER: "127.0.0.1:52091"
...
NAME: dining-saola
REVISION: 1
RELEASED: Fri Nov 23 15:06:17 2018
CHART: mychart-0.1.0
USER-SUPPLIED VALUES:
{}
...
---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dining-saola-configmap
data:
  config.json: |-
    {
      "val": "key"
    }
EDIT:
But I want the values in the config.json file to be taken from values.yaml. Is that possible?
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
data:
config.json: |-
{
{{- range $key, $val := .Values.json }}
{{ $key | quote | indent 6}}: {{ $val | quote }}
{{- end}}
}
values.yaml
json:
  key1: val1
  key2: val2
  key3: val3
helm install --dry-run --debug mychart
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mangy-hare-configmap
data:
  config.json: |-
    {
      "key1": "val1"
      "key2": "val2"
      "key3": "val3"
    }
Here is an example of a ConfigMap that is attached to a Deployment:
ConfigMap:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
data:
  config.json: |-
{{ .Files.Get "config.json" | indent 4 }}
Deployment:
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: jksapp
  labels:
    app: jksapp
spec:
  selector:
    matchLabels:
      app: jksapp
  template:
    metadata:
      labels:
        app: jksapp
    spec:
      containers:
        - name: jksapp
          image: jksapp:1.0.0
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config  # The name(key) value must match pod volumes name(key) value
              mountPath: /path/to/config.json
      volumes:
        - name: config
          configMap:
            name: jksconfig
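Note that mounting the whole ConfigMap volume at a file path like /path/to/config.json usually also needs subPath, so that only the config.json key is projected as that single file; a sketch of just the volumeMounts part:
volumeMounts:
  - name: config
    mountPath: /path/to/config.json
    subPath: config.json  # project only this key as a single file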
Soln 01:
Insert your config.json file content into a named template.
Then use this template for config.json in your data.
Then run the $ helm install command.
Finally:
{{define "config"}}
{
  "a": "A",
  "b": {
    "b1": 1
  }
}
{{end}}

apiVersion: v1
kind: ConfigMap
metadata:
  name: jksconfig
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: "my-app"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
data:
  config.json: {{ (include "config" .) | trim | quote }}

Kubectl apply for a deployment with revHistoryLimit 0 does not delete the old replica set

Here is my deployment template:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: XXX
    version: {{ xxx-version }}
    deploy_time: "{{ xxx-time }}"
  name: XXX
spec:
  replicas: 1
  revisionHistoryLimit: 0
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxUnavailable: 0%
      maxSurge: 100%
  selector:
    matchLabels:
      name: XXX
      version: {{ xxx-version }}
      deploy_time: "{{ xxx-time }}"
  template:
    metadata:
      labels:
        name: XXX
        version: {{ xxx-version }}
        deploy_time: "{{ xxx-time }}"
    spec:
      containers:
        - image: docker-registry:{{ xxx-version }}
          name: XXX
          ports:
            - name: XXX
              containerPort: 9000
The key section in the documentation that's relevant to this issue is:
Existing Replica Sets controlling Pods whose labels match .spec.selector but whose template does not match .spec.template are scaled down. Eventually, the new Replica Set will be scaled to .spec.replicas and all old Replica Sets will be scaled to 0.
http://kubernetes.io/docs/user-guide/deployments/
So the spec.selector should not vary across multiple deployments:
selector:
  matchLabels:
    name: XXX
    version: {{ xxx-version }}
    deploy_time: "{{ xxx-time }}"
should become:
selector:
  matchLabels:
    name: XXX
The rest of the labels can remain the same
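In other words, a sketch of the relevant fragment: keep the changing labels on the pod template, but keep the selector stable.
selector:
  matchLabels:
    name: XXX
template:
  metadata:
    labels:
      name: XXX
      version: {{ xxx-version }}
      deploy_time: "{{ xxx-time }}"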