I have a YAML file which is used to deploy my application in all environments. I want to add some JVM args only for the test environment. Is there any way I can do it in the YAML file?
Here is the YAML:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
    env:
    - name: JAVA_OPTS
      value: "
        -Dlog4j.configurationFile=log4j2.xml
        -Denable.scan=true
        "
Here I want -Denable.scan=true to be conditional, added only for the test environment.
I tried the following way, but it is not working and Kubernetes throws the error: error converting YAML to JSON: yaml: line 53: did not find expected key
Tried:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
  - name: front-end
    image: nginx
    ports:
    - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
    - containerPort: 88
    env:
    - name: JAVA_OPTS
      value: "
        -Dlog4j.configurationFile=log4j2.xml
        ${{if eq "TEST" "TEST" }} # just sample condition, it will change
        -Denable.scan=true
        ${{end }}
        "
Helm will do that. Plain Kubernetes YAML has no templating, which is why the API server rejects your attempt. In fact, the Helm syntax is almost identical to what you've put, and would be something like this:
env:
- name: JAVA_OPTS
  value: "
    -Dlog4j.configurationFile=log4j2.xml
    {{- if eq .Values.profile "TEST" }}
    -Denable.scan=true
    {{- end }}
    "
And you declare via the install package (called a Chart) which profile you want to use (i.e. you set the .Values.profile value), as sketched below.
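As a minimal sketch, assuming the chart's values.yaml declares a default profile (the release and chart names here are illustrative):
# values.yaml
profile: "PROD"
Then the test environment overrides it at install or upgrade time:
helm install my-release ./my-chart --set profile=TEST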
You can check out https://helm.sh/ for details and examples
Let's assume that you want to inject an extra container into all the Pods submitted to the cluster.
You could save the YAML configuration for the extra container as a file called fileA.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: envoy-pod
spec:
  containers:
  - name: proxy-container
    image: envoyproxy/envoy:v1.12.2
    ports:
    - containerPort: 80
and you have fileB.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    env:
    - name: DB_URL
      value: postgres://db_url:5432
The output should be:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    env:
    - name: DB_URL
      value: postgres://db_url:5432
  - name: proxy-container
    image: envoyproxy/envoy:v1.12.2
    ports:
    - containerPort: 80
This was possible in older versions of yq with:
yq m -a append fileA.yaml fileB.yaml
However, this appears not to be possible in v4. Any suggestions?
You can now use the merge/append operator *+:
yq '. *+ load("fileB.yaml")' fileA.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: proxy-container
    image: envoyproxy/envoy:v1.12.2
    ports:
    - containerPort: 80
  - name: test-container
    image: k8s.gcr.io/busybox
    env:
    - name: DB_URL
      value: postgres://db_url:5432
gojq won't retain the original ordering of keys, but if that is not a concern you could go with:
gojq --yaml-output --yaml-input '.spec.containers += input.spec.containers' fileB.yaml fileA.yaml
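If preserving key order matters, a similar container-level append can be done in yq v4 itself (assuming a version with the load() operator, v4.18 or later):
yq '.spec.containers += load("fileA.yaml").spec.containers' fileB.yaml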
I am unable to deploy this file using the kubectl apply -f command.
(The original post included the deployment YAML only as a screenshot.)
I have provided the YAML file required for your deployment below. It is important that all the lines are indented correctly. Hyphens (-) indicate a list item, so they are not required on every line.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-deployment
  namespace: abc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: abc-deployment
  template:
    metadata:
      labels:
        app: abc-deployment
    spec:
      containers:
      - name: abc-deployment
        image: anyimage
        ports:
        - containerPort: 80
        env:
        - name: APP_VERSION
          value: v1
        - name: ENVIRONMENT
          value: "123"
        - name: DATA
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: data
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
      imagePullSecrets:
      - name: abc-secret
As a side note, the way envFrom was used is incorrect: envFrom belongs at the container level, as a sibling of env, not nested inside it. The example above reads the ConfigMap key with env plus configMapKeyRef instead (see the DATA variable); a corrected envFrom sketch follows below.
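If the intent was to pull in every key from the ConfigMap at once, a minimal envFrom sketch, reusing abc-configmap from the example above (each ConfigMap key becomes an environment variable of the same name):
      containers:
      - name: abc-deployment
        image: anyimage
        envFrom:
        - configMapRef:
            name: abc-configmap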
If you are using Visual Studio Code, there is an official Kubernetes extension from Microsoft that provides Intellisense (suggestions) and alerts you to errors.
Hope this helps.
Is there a mechanism in a template (yml) file in k8s for generating multiple items, like a for loop?
For example, I have a template used by multiple projects, but some need one database and others two, three, or more. I don't want to define all the variables by hand and edit my file whenever a new project comes in and needs a new variable.
Example of what I do today:
template:
  (...)
  containers:
  (...)
    env:
    - name: DATABASE_NAME
      value: "{{prj.database.name}}"
    - name: DATABASE_NAME_2
      value: "{{prj.database.name.second}}"
    - name: DATABASE_USER
      valueFrom:
        secretKeyRef:
          name: "{{ k8s.deploy.app.secret.name }}"
          key: database_user
    - name: DATABASE_USER_2
      valueFrom:
        secretKeyRef:
          name: "{{ k8s.deploy.app.secret.name }}"
          key: database_user_2
As you can see, I have to copy-paste code for each user, password, and database name, in this file and also in the secrets. I use XLDeploy to insert data into my YML files.
What I'm looking for:
template:
  (...)
  containers:
  (...)
    env:
    ---- for i in {{prj.database.number}} ----
    - name: DATABASE_NAME_{i}
      value: "{{prj.database.name.{i}}}"
    - name: DATABASE_USER_{i}
      valueFrom:
        secretKeyRef:
          name: "{{ k8s.deploy.app.secret.name }}"
          key: database_user_{i}
    ---- end for loop ----
That way I could set in XLDeploy the number of databases to use for each project, and fill in the values with XLD too.
Is that possible, or do I need to use a scripting language to generate a YML file from a "YML template"?
You could use helm for this if you have other yamls related to this file. Helm allows for syntax like this:
{{- range .Values.testvalues }}
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels: {}
  name: {{ . }}
spec:
  ports:
  - name: tcp-80-8080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rss-{{ . }}
  type: ClusterIP
{{- end }}
So with a list for testvalues in the values.yaml like this:
testvalues:
- a
- b
- c
- d
Helm would generate four Service declarations, one for each item, the first looking like this:
---
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels: {}
  name: a
spec:
  ports:
  - name: tcp-80-8080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: rss-a
  type: ClusterIP
The other option, if it's just a single file, is to use Jinja and something like Python to build the file using the jinja2 module.
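A rough sketch of that approach, with made-up names (a databases list and a secret_name variable passed in from Python; loop.index is Jinja's 1-based counter):
env:
{% for db in databases %}
- name: DATABASE_NAME_{{ loop.index }}
  value: "{{ db }}"
- name: DATABASE_USER_{{ loop.index }}
  valueFrom:
    secretKeyRef:
      name: "{{ secret_name }}"
      key: database_user_{{ loop.index }}
{% endfor %}
Rendering it takes a couple of lines with the jinja2 module, e.g. jinja2.Template(template_text).render(databases=["users", "orders"], secret_name="app-secret").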
A pod created in the same (default) namespace as its secret does not see values from it.
The secret's file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: backend-secret
data:
  SECRET_KEY: <base64 of value>
  DEBUG: <base64 of value>
After creating this secret via kubectl create -f backend-secret.yaml, I'm launching a pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: backend
    name: backend
    ports:
    - containerPort: 8000
  imagePullSecrets:
  - name: dockerhub-credentials
  volumes:
  - name: secret
    secret:
      secretName: backend-secret
But the pod crashes when it tries to read this environment variable via Python's os.environ['DEBUG'] line.
How can I make it work?
If you mount a secret as a volume, it is mounted into a directory where each key name becomes a file name, as in the sketch below.
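A minimal sketch using the volume from the question (the /etc/secret mount path is made up for illustration):
    volumeMounts:
    - name: secret
      mountPath: /etc/secret
      readOnly: true
The container then sees the values as the files /etc/secret/DEBUG and /etc/secret/SECRET_KEY, not as environment variables, which is why os.environ['DEBUG'] fails.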
If you want to access the secret through environment variables in your pod, you need to reference it with secretKeyRef, like the following:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - image: backend
    name: backend
    ports:
    - containerPort: 8000
    env:
    - name: DEBUG
      valueFrom:
        secretKeyRef:
          name: backend-secret
          key: DEBUG
    - name: SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: backend-secret
          key: SECRET_KEY
  imagePullSecrets:
  - name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
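To confirm the variables are actually set, a quick check against the pod from above:
kubectl exec backend -- printenv DEBUG SECRET_KEY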
Finally, I've used these lines at Deployment.spec.template.spec.containers:
containers:
- name: backend
  image: zuber93/wts_backend
  imagePullPolicy: Always
  envFrom:
  - secretRef:
      name: backend-secret
  ports:
  - containerPort: 8000
I am trying to upgrade one of my charts, but the changes I made to the "deployment.yaml" template in the chart are not there after the upgrade. I added the following lines to the spec of my Kubernetes deployment.yaml file:
spec:
  containers:
  - env:
    - name: LOGBACK_DB_ACQUIRE_INCREMENT
      value: "1"
    - name: LOGBACK_DB_MAX_IDLE_TIME_EXCESS_CONNECTIONS
      value: "10"
    - name: LOGBACK_DB_MAX_POOL_SIZE
      value: "2"
    - name: LOGBACK_DB_MIN_POOL_SIZE
      value: "1"
I tried upgrading using the following command
helm upgrade ironic-molly spring-app-0.1.2.tgz --recreate-pods
Where "ironic-molly" is the release-name and spring-app-0.1.2.tgz is my chart with changes.
Helm's output says the package was upgraded, but the changes I made are missing from the deployment.yaml. What might be causing this issue?
Regards,
Muhammed Roshan
syntax (indents)
spec:
  containers:
  - env:
    - name: LOGBACK_DB_ACQUIRE_INCREMENT
      value: "1"
    - name: LOGBACK_DB_MAX_IDLE_TIME_EXCESS_CONNECTIONS
      value: "10"
    - name: LOGBACK_DB_MAX_POOL_SIZE
      value: "2"
    - name: LOGBACK_DB_MIN_POOL_SIZE
      value: "1"
should do the trick
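One quick way to catch indentation problems like this before upgrading is to render the chart locally and inspect the output (assuming Helm 3; the chart file is the one from the question):
helm template spring-app-0.1.2.tgz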
(If the problem is not with the indents, here is an answer that also matches the title in general.)
A few points to consider when you upgrade your helm charts:
1 ) Add the --debug to the helm upgrade command.
2 ) Check the current values of the specific resource, for example the deployment: kubectl get deploy <deployment name> -o yaml.
3 ) Check latest events: kubectl get events -n <namespace>.
4 ) Check latest logs: kubectl logs -l name=myLabel.
5 ) If you want to ensure that pods are re-created - add a specific timestamp via annotation:
kind: Deployment
metadata:
  ...
spec:
  template:
    metadata:
      labels:
        app: k8s-dashboard
      annotations:
        timestamp: "{{ date "20060102150405" .Release.Time }}"
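Combining point 1 with the release from the question, the upgrade command would look something like this (adding --dry-run renders the manifests without applying them, which helps when debugging):
helm upgrade ironic-molly spring-app-0.1.2.tgz --debug --dry-run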
I think the issue is with your indents. I tested it on my cluster and it works. The env tag should start at the same place as image; in your example it starts below containers.
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: envtest
        release: ugly-lizzard
    spec:
      containers:
      - name: envtest
        image: "nginx:stable"
        imagePullPolicy: IfNotPresent
        env:
        - name: SSHD
          value: disable
        ports:
        - containerPort: 80