I have the following deployment definition:
...
containers:
- name: {{ .Release.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{ if .Values.env.enabled }}
env:
{{- range .Values.env.vars }}
?????What comes here?????
{{- end }}
{{ end }}
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
...
in the values.yaml, I have defined:
env:
enabled: false
vars: []
What I would like to do is set the environment variables dynamically via --set, for instance:
helm template user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set env.enabled=true \
--set env.vars.POSTGRES_URL="jdbc:postgresql://localhost:5432/users" \
--set env.vars.POSTGRES_USER="dbuser" \
./svc
after rendering, it should show:
...
containers:
- name: demo
image: game.example/demo-game
env:
- name: POSTGRES_URL
value: jdbc:postgresql://localhost:5432/users
...
Also, how can I set the following option via --set?
- name: UI_PROPERTIES_FILE_NAME
valueFrom:
configMapKeyRef:
name: game-demo
key: ui_properties_file_name
You can access values passed with --set through .Values. Note that --set env.enabled=true produces a boolean, so test it directly instead of comparing against the string "true", and reference each variable exactly once under .Values.env.vars:
{{- if .Values.env.enabled }}
          env:
            - name: POSTGRES_URL
              value: {{ .Values.env.vars.POSTGRES_URL | quote }}
            - name: POSTGRES_USER
              value: {{ .Values.env.vars.POSTGRES_USER | quote }}
          {{- end }}
Try the above.
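If you would rather keep the range loop from your template so that every --set env.vars.<NAME>=<value> pair is picked up automatically, here is a minimal sketch of the loop body, assuming env.vars is used as a map (the indentation assumes the usual containers block depth):
{{- if .Values.env.enabled }}
          env:
            {{- range $name, $value := .Values.env.vars }}
            - name: {{ $name }}
              value: {{ $value | quote }}
            {{- end }}
          {{- end }}
With that in place, --set env.vars.POSTGRES_URL="jdbc:postgresql://localhost:5432/users" renders an env entry named POSTGRES_URL. For structured entries such as the configMapKeyRef example, it is usually easier to declare env.vars as a list of complete env entries and render it with {{- toYaml .Values.env.vars | nindent 12 }}; --set can then fill it with index syntax, e.g. --set env.vars[0].name=UI_PROPERTIES_FILE_NAME --set env.vars[0].valueFrom.configMapKeyRef.name=game-demo --set env.vars[0].valueFrom.configMapKeyRef.key=ui_properties_file_name.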
Related
I am running my Spring Boot application Docker image on Kubernetes using a Helm chart.
Below are the details:
templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "xyz.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "xyz.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "xyz.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Chart.yaml
apiVersion: v2
name: xyz
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: <APP_VERSION_PLACEHOLDER>
values.yaml
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
### - If we want 3 instances then we mention 3 - then 3 pods will be created on the server
### - For staging env we usually keep 1
replicaCount: 1
image:
### ---> We can also give local image details here
### ---> We can create an image in a Docker repository and use that image URL here
repository: gcr.io/mgcp-109-xyz-operations/projectname
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: "xyz"
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
schedule: "*/5 * * * *"
### SMS2-40 - There are 2 ways we can serve our applications --> 1st: LoadBalancer or 2nd: NodePort
service:
type: NodePort
port: 8087
liveness: /actuator/health/liveness
readiness: /actuator/health/readiness
###service:
### type: ClusterIP
### port: 80
ingress:
enabled: false
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
#application:
# configoveride: "config/application.properties"
templates/cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{- include "xyz.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "xyz.selectorLabels" . | nindent 4 }}
I ran my application first without cronjob.yaml.
Once my application was running on Kubernetes, I tried to convert it into a Kubernetes cron job, so I deleted templates/deployment.yaml and added templates/cronjob.yaml instead.
After I deployed my application it ran, but when I do
kubectl get cronjobs
it shows No resources found in default namespace.
What am I doing wrong here? I'm unable to figure it out.
I use the following command to install my helm chart: helm upgrade --install chartname
Not sure if your file got cut off, but it's not ended properly; an EOF error can show up when the chart is rendered.
The end part of the cronjob should be:
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
The full file should be something like
apiVersion: batch/v1
kind: CronJob
metadata:
name: test
spec:
schedule: {{ .Values.schedule }}
jobTemplate:
spec:
backoffLimit: 5
template:
spec:
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
livenessProbe:
httpGet:
path: {{ .Values.service.liveness }}
port: http
initialDelaySeconds: 60
periodSeconds: 60
readinessProbe:
httpGet:
path: {{ .Values.service.readiness }}
port: {{ .Values.service.port }}
initialDelaySeconds: 60
periodSeconds: 30
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 12 }}
{{- end }}
I just tested the above and it's working fine.
Command to render the helm chart templates locally:
helm template <chart name> . --output-dir ./yaml
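If rendering alone doesn't surface the problem, helm lint is another quick check; a small sketch, assuming it is run from the chart directory with the default values.yaml:
# Static checks on the templates, values.yaml and Chart.yaml:
helm lint .
# Render with --debug to print the generated YAML together with any template errors:
helm template my-release . --debug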
I was also deploying deployment.yaml, which was a mistake, so I deleted the deployment.yaml file and kept only the cronjob.yaml file, whose content is given below:
apiVersion: batch/v1
kind: CronJob
metadata:
name: {{ include "xyz.fullname" . }}
labels:
{{ include "xyz.labels" . | nindent 4 }}
spec:
schedule: "{{ .Values.schedule }}"
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 2
jobTemplate:
spec:
template:
spec:
restartPolicy: Never
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: DB_USER_NAME
valueFrom:
secretKeyRef:
name: customsecret
key: DB_USER_NAME
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: customsecret
key: DB_PASSWORD
- name: DB_URL
valueFrom:
secretKeyRef:
name: customsecret
key: DB_URL
- name: TOKEN
valueFrom:
secretKeyRef:
name: customsecret
key: TOKEN
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: DD_AGENT_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: DD_ENV
value: {{ .Values.datadog.env }}
- name: DD_SERVICE
value: {{ include "xyz.name" . }}
- name: DD_VERSION
value: {{ include "xyz.AppVersion" . }}
- name: DD_LOGS_INJECTION
value: "true"
- name: DD_RUNTIME_METRICS_ENABLED
value: "true"
volumeMounts:
- mountPath: /app/config
name: logback
ports:
- name: http
containerPort: {{ .Values.service.port }}
protocol: TCP
volumes:
- configMap:
name: {{ include "xyz.name" . }}
name: logback
backoffLimit: 0
metadata:
{{ with .Values.podAnnotations }}
annotations:
{{ toYaml . | nindent 8 }}
labels:
{{ include "xyz.selectorLabels" . | nindent 8 }}
{{- end }}
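To double-check that the chart really created the CronJob after helm upgrade --install, a quick verification sketch (the namespace placeholder is an assumption; adjust it to wherever the release was installed):
# List CronJobs across all namespaces, in case the release did not land in 'default':
kubectl get cronjobs -A
# Jobs and pods only show up after the schedule has fired at least once:
kubectl get jobs,pods -n <namespace>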
I am currently dealing with Helm and Kubernetes.
I would like to deploy MISP in Kubernetes, but unfortunately the helm chart always fails at the mariadb step.
In the logs of the mariadb pod I see the following error message, but unfortunately I currently have no idea how to fix this.
2022-11-23 11:13:29 0 [Note] /opt/bitnami/mariadb/sbin/mysqld: ready for connections.
Version: '10.6.11-MariaDB' socket: '/opt/bitnami/mariadb/tmp/mysql.sock' port: 3306 Source distribution
2022-11-23 11:13:31 3 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
Reading datadir from the MariaDB server failed. Got the following error when executing the 'mysql' command line client
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
FATAL ERROR: Upgrade failed
Here are my values.yaml
# Default values for misp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: coolacid/misp-docker
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart
# appVersion. N.B. in the particular case of coolacid's
# Dockerization of MISP, the misp-docker repo has multiple different
# images, and the tags not only distinguish between versions, but
# also between images.
tag: ""
imagePullSecrets: []
nameOverride: "misp"
fullnameOverride: "misp-chart"
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
#
# It appears some of the supervisord scripts need to be root,
# because they write files in /etc/cron.d.
#
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 80
ingress:
enabled: false
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
global:
storageClass: local-path
# Mariadb chart defaults to a single node.
mariadb:
# If you don't set mariadb.auth.password and
# mariadb.auth.root_password, you cannot effectively helm upgrade
# this chart.
auth:
username: misp
database: misp
password: misp
#root_password: misp
image:
# Without this, you don't get any logs the database server puts
# out, only nice colorful things said by supervisord scripts, such
# as "===> Starting the database."
debug: true
# Redis chart settings here are for a single node.
redis:
usePassword: false
cluster:
enabled: false
master:
persistence:
storageClass: local-path
mispModules:
enabled: true
# A hostname to connect to Redis. Ignored if empty.
redis:
hostname: ""
# "Initialize MISP, things includes, attempting to import SQL and the Files DIR"
initialSetup: true
# Creating the GNUPG secrets:
# pwgen -s 32
# kubectl create secret -n misp generic --from-literal='passphrase=<PASSPHRASE>' misp-gnupg-passphrase
# cd /tmp
# mkdir mgpgh
# gpg --homedir=mgpgh --gen-key
# # ^^ when you are generating the key you say what email address it is for
# mkdir mgpghs
# gpg --homedir=mgpgh --export-secret-keys -a -o mgpghs/gnupg-private-key
# kubectl create secret -n misp generic --from-file=mgpghs misp-gnupg-private-key
# rm -rf mgpgh mgpghs
gnupg:
# A Secret containing a GnuPG private key. You must construct this
# yourself.
privateKeySecret: "misp-gnupg-private-key"
# A Secret with the passphrase to unlock the private key.
passphraseSecret: "misp-gnupg-passphrase"
# The email address for which the key was created.
emailAddress: "me@mycompany.com"
# This is constructed by the container's scripts; don't change it
#homeDirectory: "/var/www/.gnupg"
homeDirectory: "/var/www/MISP/.gnupg"
passphraseFile: "/var/www/MISP/.gnupg-passphrase"
importing:
image:
repository: 'olbat/gnupg'
pullPolicy: IfNotPresent
tag: 'light'
# Authentication/authorization via OpenID Connect. See
# <https://github.com/MISP/MISP/tree/2.4/app/Plugin/OidcAuth>. Values
# here are named with snake_case according to the convention in that
# documentation, not camelCase as is usual with Helm.
#oidc:
# Use OIDC for authn/authz.
# enabled: false
# provider_url: "https://keycloak.example.com/auth/realms/example_realm/protocol/openid-connect/auth"
# client_id: "misp"
# client_secret: "01234567-5768-abcd-cafe-012345670123"
and here are my templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "misp.fullname" . }}
labels:
{{- include "misp.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "misp.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "misp.selectorLabels" . | nindent 8 }}
spec:
volumes:
- name: gnupg-home
emptyDir: {}
{{- with .Values.gnupg }}
- name: private-key
secret:
secretName: {{ .privateKeySecret }}
- name: passphrase
secret:
secretName: {{ .passphraseSecret }}
{{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "misp.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: "gnupg-import"
securityContext:
runAsNonRoot: true
{{- /* FIXME: is this always right? */}}
runAsUser: 33
{{- with .Values.gnupg }}
{{- with .importing.image }}
image: {{ printf "%s:%s" .repository .tag | quote }}
imagePullPolicy: {{ .pullPolicy }}
{{- end }}
volumeMounts:
- name: private-key
mountPath: /tmp/misp-gpg.priv
subPath: gnupg-private-key
- name: passphrase
mountPath: /tmp/misp-gpg.passphrase
subPath: passphrase
- name: gnupg-home
mountPath: {{ .homeDirectory }}
command:
- 'gpg'
- '--homedir'
- {{ .homeDirectory }}
- '--batch'
- '--passphrase-file'
- '/tmp/misp-gpg.passphrase'
- '--import'
- '/tmp/misp-gpg.priv'
{{- end }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
# this is what coolacid's docker compose.yml says
- name: CRON_USER_ID
value: "1"
# "Initialize MISP, things includes, attempting to import SQL and the Files DIR"
- name: INIT
value: {{ .Values.initialSetup | quote }}
# Don't redirect port 80 traffic to port 443.
- name: NOREDIR
value: "true"
- name: HOSTNAME
{{- if .Values.ingress.enabled }}
value: {{ printf "https://%s" (mustFirst .Values.ingress.hosts).host | quote }}
{{- else }}
value: {{ printf "http://%s" (include "misp.fullname" .) | quote }}
{{- end }}
- name: MYSQL_HOST
{{/* FIXME: can't use the mariadb.primary.fullname template because it gets evaluated in the context of this chart, and doesn't work right */}}
value: {{ .Release.Name }}-mariadb
# we allow MYSQL_PORT to take its default
- name: MYSQL_DATABASE
value: {{ .Values.mariadb.auth.database }}
- name: MYSQL_USER
value: {{ .Values.mariadb.auth.username }}
- name: MYSQL_PASSWORD
value: {{ .Values.mariadb.auth.password }}
#valueFrom:
# secretKeyRef:
# name: {{ printf "%s-%s" .Release.Name "mariadb" | quote }}
# key: mariadb-password
- name: REDIS_FQDN
value: {{ .Release.Name }}-redis-master
{{- if .Values.mispModules.enabled }}
- name: MISP_MODULES_FQDN
value: {{ printf "http://%s" (include "mispModules.fullname" .) | quote }}
{{- end }}
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 443
protocol: TCP
#volumeMounts:
# - name: passphrase
# mountPath: {{ .Values.gnupg.passphraseFile }}
# subPath: passphrase
# - name: gnupg-home
# mountPath: {{ .Values.gnupg.homeDirectory }}
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
startupProbe:
httpGet:
path: /
port: http
timeoutSeconds: 5
failureThreshold: 200
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
Thank you for your help.
Greetings, Tob
I tried changing the version of the mariadb.
I have a helm chart with deployment.yaml having the following params:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: {{ .Values.newAppName }}
chart: {{ template "newApp.chart" . }}
release: {{ .Release.Name }}
namespace: {{ .Release.Namespace }}
name: {{ .Values.deploymentName }}
spec:
replicas: {{ .Values.numReplicas }}
selector:
matchLabels:
app: {{ .Values.newAppName }}
template:
metadata:
labels:
app: {{ .Values.newAppName }}
namespace: {{ .Release.Namespace }}
annotations:
some_annotation: val
some_annotation: val
spec:
serviceAccountName: {{ .Values.podRoleName }}
containers:
- env:
- name: ENV_VAR1
value: {{ .Values.env_var_1 }}
image: {{ .Values.newApp }}:{{ .Values.newAppVersion }}
imagePullPolicy: Always
command: ["/opt/myDir/bin/newApp"]
args: ["-c", "/etc/config/newApp/{{ .Values.newAppConfigFileName }}"]
name: {{ .Values.newAppName }}
ports:
- containerPort: {{ .Values.newAppTLSPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 1
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
readinessProbe:
httpGet:
path: /v1/h
port: {{ .Values.newAppTLSPort }}
scheme: HTTPS
initialDelaySeconds: 2
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 20
volumeMounts:
- mountPath: /etc/config/newApp
name: config-volume
readOnly: true
- mountPath: /etc/config/metrics
name: metrics-volume
readOnly: true
- mountPath: /etc/version/container
name: container-info-volume
readOnly: true
- name: {{ template "newAppClient.name" . }}-client
image: {{ .Values.newAppClientImage }}:{{ .Values.newAppClientVersion }}
imagePullPolicy: Always
args: ["run", "--server", "--config-file=/newAppClientPath/config.yaml", "--log-level=debug", "/newAppClientPath/pl"]
volumeMounts:
- name: newAppClient-files
mountPath: /newAppClient-path
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
volumes:
- name: config-volume
configMap:
name: {{ .Values.newAppConfigMapName }}
- name: container-info-volume
configMap:
name: {{ .Values.containerVersionConfigMapName }}
- name: metrics-volume
configMap:
name: {{ .Values.metricsConfigMapName }}
- name: newAppClient-files
configMap:
name: {{ .Values.newAppClientConfigMapName }}
items:
- key: config
path: config.yaml
This helm chart is consumed by Jenkins and then deployed by Spinnaker onto the AWS EKS service.
A security measure we enforce is that the /root directory should be private in all our containers, so it should basically deny permission when a user manually tries to access it after running
kubectl exec -it -n namespace_name pod_name -c container_name bash
into the container.
But when I enter the container terminal why can I still
cd /root
inside the container when it is running as non-root?
EXPECTED: It should give the following error which it is not giving:
cd root/
bash: cd: root/: Permission denied
OTHER VALUES THAT MIGHT BE USEFUL TO DEBUG:
Output of "ls -la" inside the container:
dr-xr-x--- 1 root root 18 Jul 26 2021 root
As you can see, r and x are unset for OTHER on the root folder.
Output of "id" inside the container:
bash-4.2$ id
uid=1000 gid=0(root) groups=0(root),1000
Using a helm chart locally to reproduce the error:
The same 3 securityContext params, when used locally in the helm chart of a simple Go program, yield the desired result.
Deployment.yaml of helm chart:
apiVersion: {{ template "common.capabilities.deployment.apiVersion" . }}
kind: Deployment
metadata:
name: {{ template "fullname" . }}
labels:
chart: "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "fullname" . }}
template:
metadata:
labels:
app: {{ template "fullname" . }}
spec:
securityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: {{ .Values.service.internalPort }}
livenessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
readinessProbe:
httpGet:
path: /
port: {{ .Values.service.internalPort }}
resources:
{{ toYaml .Values.resources | indent 12 }}
Output of ls -la inside the container on local setup:
drwx------ 2 root root 4096 Jan 25 00:00 root
You can always cd into / on a UNIX system as non-root, so you can do it inside your container as well. However, creating a file there, for example, should fail with Permission denied.
Check the following.
# Run a container as non-root
docker run -it --rm --user 7447 busybox sh
# Check that it's possible to cd into '/'
cd /
# Try creating file
touch some-file
touch: some-file: Permission denied
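The same check can be repeated inside the pod from the question; the namespace, pod and container names below are the placeholders used in the question, not real values:
# Exec into the running container, show the identity and try to write at /:
kubectl exec -it -n namespace_name pod_name -c container_name -- sh -c 'id; cd / && touch some-file'
# Expected output ends with something like: touch: cannot touch 'some-file': Permission denied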
I have the following define definition in helm:
{{- define "svc.envVars" -}}
{{- range .Values.envVars.withSecret }}
- name: {{ .key }}
valueFrom:
secretKeyRef:
name: {{ .secretName }}
key: {{ .secretKey | quote }}
{{- end }}
{{- range .Values.envVars.withoutSecret }}
- name: {{ .name }}
value: {{ .value | quote }}
{{- end }}
{{- end }}
and I am going to use it in deployment.yaml
containers:
- name: {{ .Release.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
{{- if .Values.envVars.enabled }}
env:
{{- include "svc.envVars" . | indent 10 }}
{{- end }}
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
in values.yaml it is defined as follows:
envVars:
enabled: false
withSecret: []
withoutSecret: []
then I tried to render:
helm template --debug user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set envVars.enabled=true \
--set envVars.withSecret[0].key=POSTGRES_URL,envVars.withSecret[0].secretName=postgres_name,envVars.withSecret[0].secretKey=postgres_pw \
--set envVars.withSecret[1].key=MYSQL_URL,envVars.withSecret[1].secretName=mysql_name,envVars.withSecret[1].secretKey=mysql_pw \
./svc
it shows me:
zsh: no matches found: envVars.withSecret[0].key=POSTGRES_URL,envVars.withSecret[0].secretName=postgres_name,envVars.withSecret[0].secretKey=postgres_pw
When I set the variables manually in values.yaml:
envVars:
enabled: false
withSecret:
- key: POSTGRES_URL
secretName: postgres_name
secretKey: postgres_pw
- key: MYSQL_URL
secretName: mysql_name
secretKey: mysql_pw
withoutSecret:
- name: NOT_SECRET
value: "Value of not secret"
then render it with:
helm template --debug user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set envVars.enabled=true \
./svc
then it works as expected.
What am I doing wrong?
I had the same issue. It is due to the fact that '[' and ']' are interpreted by zsh as glob characters.
You can use noglob to disable globbing, so '[' and ']' are not interpreted:
noglob helm template --set lorem[0].ipsum=1337
In your example:
noglob helm template --debug user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set envVars.enabled=true \
--set envVars.withSecret[0].key=POSTGRES_URL,envVars.withSecret[0].secretName=postgres_name,envVars.withSecret[0].secretKey=postgres_pw \
--set envVars.withSecret[1].key=MYSQL_URL,envVars.withSecret[1].secretName=mysql_name,envVars.withSecret[1].secretKey=mysql_pw \
./svc
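Quoting the --set arguments also works, if you'd rather not use noglob; single quotes stop zsh from treating [ and ] as glob patterns:
helm template --debug user-svc \
--set image.tag=0.1.0 \
--set image.repository=user-svc \
--set envVars.enabled=true \
--set 'envVars.withSecret[0].key=POSTGRES_URL,envVars.withSecret[0].secretName=postgres_name,envVars.withSecret[0].secretKey=postgres_pw' \
--set 'envVars.withSecret[1].key=MYSQL_URL,envVars.withSecret[1].secretName=mysql_name,envVars.withSecret[1].secretKey=mysql_pw' \
./svc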
Sources:
Helm - zsh: no matches found: imagePullSecrets[0].name=regcred
I'm getting the following error when I try to deploy Nexus using Kubernetes.
Command: kubectl apply -f templates/deployment.yaml
error parsing templates/deployment.yaml: json: line 1: invalid
character '{' looking for beginning of object key string
Did anybody face this issue?
Please find below the code I'm trying:
{{- if .Values.localSetup.enabled }}
apiVersion: apps/v1
kind: Deployment
{{- else }}
apiVersion: apps/v1
kind: StatefulSet
{{- end }}
metadata:
labels:
app: nexus
name: nexus
spec:
replicas: 1
selector:
matchLabels:
app: nexus
template:
metadata:
labels:
app: nexus
spec:
{{- if .Values.localSetup.enabled }}
volumes:
- name: nexus-data
persistentVolumeClaim:
claimName: nexus-pv-claim
- name: nexus-data-backup
persistentVolumeClaim:
claimName: nexus-backup-pv-claim
{{- end }}
containers:
- name: nexus
image: "quay.io/travelaudience/docker-nexus:3.15.2"
imagePullPolicy: Always
env:
- name: INSTALL4J_ADD_VM_PARAMS
value: "-Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
resources:
requests:
cpu: 250m
memory: 4800Mi
ports:
- containerPort: {{ .Values.nexus.dockerPort }}
name: nexus-docker-g
- containerPort: {{ .Values.nexus.nexusPort }}
name: nexus-http
volumeMounts:
- mountPath: "/nexus-data"
name: nexus-data
- mountPath: "/nexus-data/backup"
name: nexus-data-backup
{{- if .Values.useProbes.enabled }}
livenessProbe:
httpGet:
path: {{ .Values.nexus.livenessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.livenessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.livenessProbe.failureThreshold }}
{{- if .Values.nexus.livenessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.livenessProbe.timeoutSeconds }}
{{- end }}
readinessProbe:
httpGet:
path: {{ .Values.nexus.readinessProbe.path }}
port: {{ .Values.nexus.nexusPort }}
initialDelaySeconds: {{ .Values.nexus.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.nexus.readinessProbe.periodSeconds }}
failureThreshold: {{ .Values.nexus.readinessProbe.failureThreshold }}
{{- if .Values.nexus.readinessProbe.timeoutSeconds }}
timeoutSeconds: {{ .Values.nexus.readinessProbe.timeoutSeconds }}
{{- end }}
{{- end }}
{{- if .Values.nexusProxy.enabled }}
- name: nexus-proxy
image: "quay.io/travelaudience/docker-nexus-proxy:2.4.0_8u191"
imagePullPolicy: Always
env:
- name: ALLOWED_USER_AGENTS_ON_ROOT_REGEX
value: "GoogleHC"
- name: CLOUD_IAM_AUTH_ENABLED
value: "false"
- name: BIND_PORT
value: {{ .Values.nexusProxy.targetPort | quote }}
- name: ENFORCE_HTTPS
value: "false"
{{- if .Values.localSetup.enabled }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusLocalDockerhost }}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusLocalHttphost }}
{{- else }}
- name: NEXUS_DOCKER_HOST
value: {{ .Values.nexusProxy.nexusDockerHost}}
- name: NEXUS_HTTP_HOST
value: {{ .Values.nexusProxy.nexusHttpHost }}
{{- end }}
- name: UPSTREAM_DOCKER_PORT
value: {{ .Values.nexus.dockerPort | quote }}
- name: UPSTREAM_HTTP_PORT
value: {{ .Values.nexus.nexusPort | quote }}
- name: UPSTREAM_HOST
value: "localhost"
ports:
- containerPort: {{ .Values.nexusProxy.targetPort }}
name: proxy-port
{{- end }}
{{- if .Values.nexusBackup.enabled }}
- name: nexus-backup
image: "quay.io/travelaudience/docker-nexus-backup:1.4.0"
imagePullPolicy: Always
env:
- name: NEXUS_AUTHORIZATION
value: false
- name: NEXUS_BACKUP_DIRECTORY
value: /nexus-data/backup
- name: NEXUS_DATA_DIRECTORY
value: /nexus-data
- name: NEXUS_LOCAL_HOST_PORT
value: "localhost:8081"
- name: OFFLINE_REPOS
value: "maven-central maven-public maven-releases maven-snapshots"
- name: TARGET_BUCKET
value: "gs://nexus-backup"
- name: GRACE_PERIOD
value: "60"
- name: TRIGGER_FILE
value: .backup
volumeMounts:
- mountPath: /nexus-data
name: nexus-data
- mountPath: /nexus-data/backup
name: nexus-data-backup
terminationGracePeriodSeconds: 10
{{- end }}
{{- if .Values.persistence.enabled }}
volumeClaimTemplates:
- metadata:
name: nexus-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
- metadata:
name: nexus-data-backup
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
storageClassName: {{ .Values.persistence.storageClass }}
{{- end }}
Any leads would be appreciated!
Regards
Mani
The template you provided here is part of a Helm chart, which should be deployed using the Helm CLI, not with kubectl apply; kubectl cannot parse the Go template directives ({{ ... }}), which is why it complains about the invalid character '{'.
More info on using Helm is here.
You can also find the instructions to install Nexus using Helm in the official stable helm chart.
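For completeness, a minimal sketch of the two usual ways to get such a chart onto the cluster (the release name and chart path are assumptions):
# Let Helm render and install the chart in one step:
helm upgrade --install nexus ./nexus-chart
# Or render the templates to plain manifests first and apply those with kubectl:
helm template nexus ./nexus-chart > rendered.yaml
kubectl apply -f rendered.yaml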
Hope this helps.