Kubernetes YAML, use a for loop

Is there a mechanism in a template (YAML) file in k8s for generating multiple items, like a for loop?
For example, I have a template used by multiple projects, but some need 1 database and others 2, 3 or more. I don't want to define all the variables by hand and edit the file every time a new project comes in and needs a new variable.
Example of what I do today:
template:
  (...)
  containers:
    (...)
    env:
    - name: DATABASE_NAME
      value: "{{prj.database.name}}"
    - name: DATABASE_NAME_2
      value: "{{prj.database.name.second}}"
    - name: DATABASE_USER
      valueFrom:
        secretKeyRef:
          name: "{{ k8s.deploy.app.secret.name }}"
          key: database_user
    - name: DATABASE_USER_2
      valueFrom:
        secretKeyRef:
          name: "{{ k8s.deploy.app.secret.name }}"
          key: database_user_2
As you can see, I have to copy-paste code for each user + password + database name, in this file and also in the secrets. I use XLDeploy to insert data into my YAML files.
What I'm looking for:
template:
  (...)
  containers:
    (...)
    env:
    ---- for i in {{prj.database.number}} ----
    - name: DATABASE_NAME_{i}
      value: "{{prj.database.name.{i}}}"
    - name: DATABASE_USER_{i}
      valueFrom:
        secretKeyRef:
          name: "{{ k8s.deploy.app.secret.name }}"
          key: database_user_{i}
    ---- end for loop ----
That way I could set in XLDeploy the number of databases to use for each project, and fill in the values with XLD too.
Is that possible, or do I need to use a scripting language to generate a YAML file from a "YAML template"?

You could use Helm for this if you have other YAML files related to this one. Helm allows syntax like this:
{{- range .Values.testvalues }}
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels: {}
  name: {{ . }}
spec:
  ports:
  - name: tcp-80-8080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rss-{{ . }}
  type: ClusterIP
{{- end }}
So with a list for testvalues in the values.yaml like this:
testvalues:
- a
- b
- c
- d
Helm would generate four service declarations -- one for each item. The first looking like this:
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels: {}
  name: a
spec:
  ports:
  - name: tcp-80-8080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rss-a
  type: ClusterIP
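Applied to the env block from the original question, the same range idea might look roughly like this. This is only a sketch: the databases list in values.yaml, its name/userKey fields, and the secretName value are names made up for illustration, not anything Helm prescribes.

values.yaml:

secretName: my-app-secret
databases:
  - name: customers
    userKey: database_user
  - name: orders
    userKey: database_user_2

template snippet:

env:
{{- range $i, $db := .Values.databases }}
- name: DATABASE_NAME_{{ add $i 1 }}
  value: {{ $db.name | quote }}
- name: DATABASE_USER_{{ add $i 1 }}
  valueFrom:
    secretKeyRef:
      name: {{ $.Values.secretName | quote }}
      key: {{ $db.userKey }}
{{- end }}

Inside the range block the dot is rebound to the current list item, so the root values are reached through $ ($.Values.secretName above); add is one of the Sprig functions bundled with Helm, and $i starts at 0, hence the add $i 1.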
The other option, if it's just a single file, is to use Jinja and something like Python to build the file from a template with the jinja2 module.
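If you go the Jinja route, the loop lives in the template and you render it with a few lines of Python and the jinja2 package. A minimal sketch of such a template, where databases and secret_name are variables you would pass in yourself:

env:
{% for db in databases %}
- name: DATABASE_NAME_{{ loop.index }}
  value: "{{ db.name }}"
- name: DATABASE_USER_{{ loop.index }}
  valueFrom:
    secretKeyRef:
      name: "{{ secret_name }}"
      key: database_user_{{ loop.index }}
{% endfor %}

loop.index is Jinja's built-in 1-based counter, so the generated names follow the DATABASE_NAME_1, DATABASE_NAME_2 pattern.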

Related

How to apply the imported realm configuration file of keycloak when deploying on k8s

My file directory looks like this:
deployment.yaml
config.yaml
import/
  realm.json
This is the deployment.yaml file that I used based on the suggestion from Harsh Manvar:
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  selector:
    app: keycloak
  type: NodePort
  ports:
  - port: 8080
    targetPort: http
    protocol: TCP
    name: http
    nodePort: 32488
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:17.0.1
        args:
        - "start-dev"
        - "--import-realm"
        env:
        - name: KEYCLOAK_ADMIN
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: KEYCLOAK_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
        - name: KC_PROXY
          value: "edge"
        volumeMounts:
        - name: keycloak-volume
          mountPath: "/import/realm.json"
          name: "keycloak-volume"
          readOnly: true
          subPath: "realm.json"
        ports:
        - name: http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /realms/master
            port: 8080
          initialDelaySeconds: 120
      volumes:
      - name: keycloak-volume
        configMap:
          name: keycloak-configmap
And my config.yaml looks like this (where {json_content} is where I paste the content of the imported realm JSON file):
apiVersion: v1
data:
  realm.json: |
    {json_content}
kind: ConfigMap
metadata:
  name: keycloak-configmap
But when I access the Keycloak dashboard's web GUI, the imported realm does not show up.
Try it with this once:
- mountPath: "/import/realm.json"
  name: "keycloak-volume"
  readOnly: true
  subPath: "realm.json"
On older versions (the legacy WildFly-based ones, I think) it was possible to import the Keycloak realm using environment variables, but that is no longer supported: https://github.com/keycloak/keycloak/issues/10216
Also, this is supported in version 18, while you are using 17.
Still, with 17 you can give it a try by passing arguments to the deployment config (see the official import doc):
args:
- "start-dev"
- "--import-realm"
Also, if you check that thread, some people suggest using the variable KEYCLOAK_REALM_IMPORT.
I also came across this blog, which points to a legacy option for importing the realm; do check it out: http://www.mastertheboss.com/keycloak/keycloak-with-docker/
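Putting those pieces together, the relevant fragment of the Deployment could look like the sketch below. It reuses the names already present in the question; the one deliberate change is the mount target, because the Quarkus-based distribution's --import-realm picks up realm files from /opt/keycloak/data/import by default rather than /import:

      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:17.0.1
        args:
        - "start-dev"
        - "--import-realm"
        volumeMounts:
        - mountPath: "/opt/keycloak/data/import/realm.json"
          name: "keycloak-volume"
          readOnly: true
          subPath: "realm.json"
      volumes:
      - name: keycloak-volume
        configMap:
          name: keycloak-configmap

Also, instead of hand-pasting the JSON into config.yaml, the ConfigMap can be generated straight from the file with kubectl create configmap keycloak-configmap --from-file=realm.json=import/realm.json.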

Kubernetes YAML file with if-else condition

I have a YAML file which I use to deploy my application in all environments. I want to add some JVM args only for the test environment. Is there any way I can do it in the YAML file?
Here is the YAML:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
    env:
    - name: JAVA_OPTS
      value: "
        -Dlog4j.configurationFile=log4j2.xml
        -Denable.scan=true
        "
Here I want -Denable.scan=true to be conditional and added only for the test environment.
I tried the following way, but it is not working and Kubernetes throws the error error converting YAML to JSON: yaml: line 53: did not find expected key
Tried:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
    env:
    - name: JAVA_OPTS
      value: "
        -Dlog4j.configurationFile=log4j2.xml
        ${{if eq "TEST" "TEST" }} # just sample condition , it will change
        -Denable.scan=true
        ${{end }}
        "
Helm will do that. In fact, the syntax is almost identical to what you've written, and would be something like this:
env:
- name: JAVA_OPTS
  value: "
    -Dlog4j.configurationFile=log4j2.xml
    {{- if eq .Values.profile "TEST" }}
    -Denable.scan=true
    {{- end }}
    "
And you declare via the install package (called a chart) which profile you want to use, i.e. you set the .Values.profile value.
You can check out https://helm.sh/ for details and examples
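A minimal sketch of the values side, assuming you call the value profile as in the snippet above:

# values.yaml
profile: PROD

You can then override it per environment at install time, e.g. helm install my-app ./chart --set profile=TEST, or keep one values file per environment and pass it with -f.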

error when creating "deployment.yaml", Deployment in version "v1" cannot be handled as a Deployment

I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on AWS. Creating the deployment keeps bringing up errors that I can't decode for now. This is just a test deployment in preparation for the migration of my company's web apps to kubernetes.
I tried editing the content of the deployment to look like conventional examples I've found. I can't even get this simple example to work. You may find the deployment.yaml content below.
apiVersion: v1
kind: Service
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  ports:
  - port: 80
  selector:
    app: ghost
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      containers:
      - image: ghost:4-alpine
        name: ghost
        env:
        - name: database_client
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: client
        - name: database_connection_host
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: host
        - name: database_connection_user
          valueFrom:
            secretKeyRef:tha
        - name: database_connection_password
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: ghostdcp
        - name: database_connection_database
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: ghostdcd
        ports:
        - containerPort: 2368
          name: ghost
        volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost
      volumes:
      - name: ghost-persistent-storage
        persistentVolumeClaim:
          claimName: efs-ghost
I ran this command in the folder containing the file:
kubectl create -f deployment-ghost.yaml --validate=false
service/ghost created
Error from server (BadRequest): error when creating "deployment-ghost.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.ValueFrom: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|lueFrom":"secretKeyR|..., bigger context ...|},{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},{"name":"database_connection_pa|...
I couldn't find any information on this in my searches, and I just can't get the deployment created. Please, can anyone who understands this help me out?
{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},
Your spec has an error: the stray text after secretKeyRef: makes valueFrom a plain string instead of a mapping, which is exactly what the error message is complaining about. That env entry should look like:
...
- name: database_connection_user # <-- The error message points to this env variable
  valueFrom:
    secretKeyRef:
      name: <secret name, eg. eks-keys>
      key: <key in the secret>
...

Import data to config map from kubernetes secret

I'm using a Kubernetes ConfigMap that contains database configuration for an app, and there is a Secret that holds the database password.
I need to use this Secret in the ConfigMap. When I add an environment variable in the ConfigMap and set its value from the Secret in the pod deployment, I'm not able to connect to MySQL with the password, because the ConfigMap keeps the literal variable string instead of substituting it.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  APP_CONFIG: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "$DB_PASSWORD"
and the deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: app
        image: simple-app-image
        ports:
        - name: "8080"
          containerPort: 8080
        env:
        - name: APP_CONFIG
          valueFrom:
            configMapKeyRef:
              name: config
              key: APP_CONFIG
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "mysql-secret"
              key: "mysql-root-password"
Note: the secret exists and I'm able to get the "mysql-root-password" value and use it to log in to the database.
Kubernetes can't make that substitution for you; you should do it with the shell in the entrypoint of the container.
This is a working example. I modify the default entrypoint to create a new variable with that substitution. After this command you should add the image's desired entrypoint.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    app: backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: app
        image: simple-app-image
        command:
        - /bin/bash
        - -c
        args:
        - "NEW_APP_CONFIG=$(echo $APP_CONFIG | envsubst) && echo $NEW_APP_CONFIG && <INSERT IMAGE ENTRYPOINT HERE>"
        ports:
        - name: "app"
          containerPort: 8080
        env:
        - name: APP_CONFIG
          valueFrom:
            configMapKeyRef:
              name: config
              key: APP_CONFIG
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: "mysql-secret"
              key: "mysql-root-password"
You could do something like this in Helm:
{{- define "getValueFromSecret" }}
{{- $len := (default 16 .Length) | int -}}
{{- $obj := (lookup "v1" "Secret" .Namespace .Name).data -}}
{{- if $obj }}
{{- index $obj .Key | b64dec -}}
{{- else -}}
{{- randAlphaNum $len -}}
{{- end -}}
{{- end }}
Then you could do something like this in the ConfigMap template:
{{- include "getValueFromSecret" (dict "Namespace" .Release.Namespace "Name" "<secret_name>" "Length" 10 "Key" "<key>") -}}
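For example, wired into the ConfigMap template from the question, using the secret and key names the question already mentions (note that lookup only works against a live cluster, so it returns nothing when rendering offline with helm template):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
data:
  APP_CONFIG: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "{{ include "getValueFromSecret" (dict "Namespace" .Release.Namespace "Name" "mysql-secret" "Length" 16 "Key" "mysql-root-password") }}"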
The secret should already be present when deploying, or you can control the order of deployment using https://github.com/vmware-tanzu/carvel-kapp-controller
I would transform the whole configMap into a secret and deploy the database password directly in there.
Then you can mount the secret as a file to a volume and use it like a regular config file in the container.
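A sketch of that approach, with the names app-config and /etc/app chosen just for illustration: the whole APP_CONFIG document moves into a Secret (with the real password filled in) and is then mounted as a file.

apiVersion: v1
kind: Secret
metadata:
  name: app-config
stringData:
  app-config.yaml: |
    port: 8080
    databases:
      default:
        connector: mysql
        host: "mysql"
        port: "3306"
        user: "root"
        password: "<actual password here>"

And in the pod spec of the Deployment:

containers:
- name: app
  image: simple-app-image
  volumeMounts:
  - name: app-config
    mountPath: /etc/app
    readOnly: true
volumes:
- name: app-config
  secret:
    secretName: app-config

The app then reads /etc/app/app-config.yaml instead of the APP_CONFIG environment variable.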

Host specific volumes in Kubernetes manifests

I am fairly sure this isn't possible, but I wanted to check.
I am using Kubernetes stateful sets, so my hosts get obvious hostnames.
I'd like them to provision a hostPath mount that is mapped to their hostname.
An example helm chart that I'm using might look like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: app
  namespace: '{{ .Values.name }}'
  labels:
    chart: '{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}'
spec:
  serviceName: "app"
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: app
    spec:
      terminationGracePeriodSeconds: 30
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}/{{ .Values.image.version}}"
        imagePullPolicy: '{{ .Values.image.pullPolicy }}'
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: {{ .Values.baseport | add 80 }}
          name: app
        volumeMounts:
        - mountPath: /NAS/$(POD_NAME)
          name: store
          readOnly: true
      volumes:
      - name: store
        hostPath:
          path: /NAS/$(POD_NAME)
Essentially, instead of hardcoding the volume path, I'd like to have some kind of dynamic variable in it. I don't mind using Helm or the downward API for this, but ideally it would also work when I scale the stateful set out.
Is there any way of doing this? All my reading of the docs suggests there isn't... :(