Use Helm variables in entrypoint script - Kubernetes

I'm struggling to use Helm variables within my container's entrypoint script when deploying to AKS. Running locally works perfectly fine, as I specify them as docker -e arguments. How do I pass arguments that are either specified as Helm values and/or overridden when issuing the helm install command?
Entry script start.sh
#!/bin/bash
GH_OWNER=$GH_OWNER
GH_REPOSITORY=$GH_REPOSITORY
GH_TOKEN=$GH_TOKEN
echo "variables"
echo $GH_TOKEN
echo $GH_OWNER
echo $GH_REPOSITORY
echo ${GH_TOKEN}
echo ${GH_OWNER}
echo ${GH_REPOSITORY}
env
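For reference, a typical self-hosted runner entrypoint consumes these variables roughly as below; this is a sketch following the standard GitHub Actions runner setup, not the full script used here (the registration-token endpoint and config.sh flags come from the public runner documentation):
#!/bin/bash
# Exchange the PAT for a short-lived registration token for this repository
REG_TOKEN=$(curl -sX POST \
  -H "Authorization: token ${GH_TOKEN}" \
  "https://api.github.com/repos/${GH_OWNER}/${GH_REPOSITORY}/actions/runners/registration-token" \
  | jq -r .token)

cd /home/docker/actions-runner
# Register the runner against the repository, then start listening for jobs
./config.sh --unattended \
  --url "https://github.com/${GH_OWNER}/${GH_REPOSITORY}" \
  --token "${REG_TOKEN}"
./run.sh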
Dockerfile
# base image
FROM ubuntu:20.04
#input GitHub runner version argument
ARG RUNNER_VERSION
ENV DEBIAN_FRONTEND=noninteractive
# update the base packages + add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
# install the packages and dependencies along with jq so we can parse JSON (add additional packages as necessary)
RUN apt-get install -y --no-install-recommends \
curl nodejs wget unzip vim git azure-cli jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev python3-pip
# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
&& curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
# add over the start.sh script
ADD scripts/start.sh start.sh
# make the script executable
RUN chmod +x start.sh
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker
# set the entrypoint to the start.sh script
ENTRYPOINT ["/start.sh"]
Helm values
replicaCount: 1
image:
  repository: somecreg.azurecr.io/ghrunner
  pullPolicy: Always
  # tag: latest
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
env:
  GH_TOKEN: "SET"
  GH_OWNER: "SET"
  GH_REPOSITORY: "SET"
serviceAccount:
  create: true
  annotations: {}
  name: ""
podAnnotations: {}
podSecurityContext: {}
securityContext: {}
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
  className: ""
  annotations: {}
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
resources: {}
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hostedrunner.fullname" . }}
  labels:
    {{- include "hostedrunner.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "hostedrunner.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "hostedrunner.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "hostedrunner.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          # livenessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          # readinessProbe:
          #   httpGet:
          #     path: /
          #     port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
Console output for helm install
Helm command (tried both --set and --set-string to get the values substituted correctly)
helm install --set-string env.GH_TOKEN="$env:pat" --set-string env.GH_OWNER="SomeOwner" --set-string env.GH_REPOSITORY="aks-hostedrunner" $deploymentName .helm/ --debug
I thought the Helm values might be passed as environment variables, but that's not the case. Any input is greatly appreciated.

You can update your deployment template with the following env block:
          env:
            {{- range $key, $val := .Values.env }}
            - name: {{ $key }}
              value: {{ $val }}
            {{- end }}
This adds the env block into your Deployment spec, so when your shell script runs inside the container it will be able to access those environment variables.
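To confirm the values make it through before installing, you can render the chart locally; a quick verification sketch (the release name my-runner is just a placeholder):
helm template my-runner .helm/ \
  --set-string env.GH_OWNER="SomeOwner" \
  --set-string env.GH_REPOSITORY="aks-hostedrunner" \
  --set-string env.GH_TOKEN="some-token" \
  | grep -A 8 "env:"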
Deployment env example
containers:
  - name: envar-demo-container
    image: <Your Docker image>
    env:
      - name: DEMO_GREETING
        value: "Hello from the environment"
      - name: DEMO_FAREWELL
        value: "Such a sweet sorrow"
Ref : https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container
If you implement the above, those variables will be set as environment variables and the shell script inside the container will be able to access them.
You can also use a Kubernetes ConfigMap or Secret to set values at the environment level.
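For example, the token is usually better kept in a Secret and loaded with envFrom; a sketch, where the Secret name gh-runner-secrets is an assumption and not part of the chart above:
apiVersion: v1
kind: Secret
metadata:
  name: gh-runner-secrets
type: Opaque
stringData:
  GH_TOKEN: "some-token"
  GH_OWNER: "SomeOwner"
  GH_REPOSITORY: "aks-hostedrunner"
and in the container spec of the Deployment:
          envFrom:
            - secretRef:
                name: gh-runner-secrets
Every key in the Secret then becomes an environment variable of the same name inside the container.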

Related

Kubernetes cluster unable to pull images from DigitalOcean Registry

My DigitalOcean kubernetes cluster is unable to pull images from the DigitalOcean registry. I get the following error message:
Failed to pull image "registry.digitalocean.com/XXXX/php:1.1.39": rpc error: code = Unknown desc = failed to pull and unpack image
"registry.digitalocean.com/XXXXXXX/php:1.1.39": failed to resolve reference
"registry.digitalocean.com/XXXXXXX/php:1.1.39": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized
I have added the kubernetes cluster using DigitalOcean Container Registry Integration, which shows there successfully both on the registry and the settings for the kubernetes cluster.
I can confirm the above address registry.digitalocean.com/XXXX/php:1.1.39 matches the one in the registry. I wonder if I'm misunderstanding how the token / login integration works with the registry, but I'm under the impression that this was a "one click" thing and that the cluster would automatically get the connection to the registry after that.
I have tried logging Helm into the registry before pushing, but this did not work (and I wouldn't really expect it to; the cluster should be pulling the image).
It's not completely clear to me how the image pull secrets are supposed to be used.
My helm deployment chart is basically the default for API Platform:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "api-platform.fullname" . }}
  labels:
    {{- include "api-platform.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "api-platform.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "api-platform.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "api-platform.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}-caddy
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.caddy.image.repository }}:{{ .Values.caddy.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.caddy.image.pullPolicy }}
          env:
            - name: SERVER_NAME
              value: :80
            - name: PWA_UPSTREAM
              value: {{ include "api-platform.fullname" . }}-pwa:3000
            - name: MERCURE_PUBLISHER_JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ include "api-platform.fullname" . }}
                  key: mercure-publisher-jwt-key
            - name: MERCURE_SUBSCRIBER_JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: {{ include "api-platform.fullname" . }}
                  key: mercure-subscriber-jwt-key
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: admin
              containerPort: 2019
              protocol: TCP
          volumeMounts:
            - mountPath: /var/run/php
              name: php-socket
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: admin
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: admin
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
        - name: {{ .Chart.Name }}-php
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.php.image.repository }}:{{ .Values.php.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.php.image.pullPolicy }}
          env:
            {{ include "api-platform.env" . | nindent 12 }}
          volumeMounts:
            - mountPath: /var/run/php
              name: php-socket
          readinessProbe:
            exec:
              command:
                - docker-healthcheck
            initialDelaySeconds: 120
            periodSeconds: 3
          livenessProbe:
            exec:
              command:
                - docker-healthcheck
            initialDelaySeconds: 120
            periodSeconds: 3
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
        - name: php-socket
          emptyDir: {}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
How do I authorize the kubernetes cluster to pull from the registry? Is this a helm thing or a kubernetes only thing?
Thanks!
The problem that you have is that you do not have an image pull secret for your cluster to use to pull from the registry.
You will need to add one to give your cluster a way to authorize its requests to the registry.
Using the DigitalOcean Kubernetes integration for Container Registry
DigitalOcean provides a way to add image pull secrets to a Kubernetes cluster in your account. You can link the registry to the cluster in the settings of the registry. Under "DigitalOcean Kubernetes Integration" select edit, then select the cluster you want to link the registry to.
This action adds an image pull secret to all namespaces within the cluster and will be used by default (unless you specify otherwise).
The issue was that API Platform automatically has a default value for imagePullSecrets in the helm chart, which is
imagePullSecrets: []
in values.yaml
So this seems to prevent Kubernetes from picking up the image pull secret in the way that I expected. The solution was to add the name of the image pull secret directly to the helm deployment command, like this:
--set "imagePullSecrets[0].name=registry-secret-name-goes-here"
You can view the name of your secret using kubectl get secrets like this:
kubectl get secrets
And the output should look something like this:
NAME                             TYPE                                  DATA   AGE
default-token-lz2ck              kubernetes.io/service-account-token   3      38d
registry-secret-name-goes-here   kubernetes.io/dockerconfigjson        1      2d16h
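Equivalently, instead of passing it on the command line, the secret can be listed in values.yaml; a sketch using the secret name from the output above:
imagePullSecrets:
  - name: registry-secret-name-goes-here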

Kubernetes Issue

I have a microservice (on Node.js).
I am creating a Docker image for it and pushing it to my local registry running at
localhost:5001
While deploying this microservice using Helm with
helm upgrade --install --wait --set env=dev --set image.tag=localhost:5001/user-service userservice-api ./build/helm --namespace dev --create-namespace --kube-context http://localhost:5001
I get
Error: Kubernetes cluster unreachable: context "http://localhost:5001"
does not exist
How do I find out the issue and resolve it?
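For context, --kube-context expects the name of a context from your kubeconfig rather than a registry URL; the available names can be listed as below (the context name in the second command is only an illustrative placeholder):
# List the contexts defined in your kubeconfig; the NAME column is what --kube-context expects
kubectl config get-contexts

# Example invocation against one of those names (replace my-cluster-context with a real one)
helm upgrade --install --wait --set env=dev userservice-api ./build/helm \
  --namespace dev --create-namespace --kube-context my-cluster-context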
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "chart.fullname" . }}
  labels:
    {{- include "chart.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "chart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "chart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 4006
              protocol: TCP
          env:
            - name: ENV
              value: "{{ .Values.env }}"
          readinessProbe:
            httpGet:
              path: /health
              port: 4006
            initialDelaySeconds: 15
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 4006
            initialDelaySeconds: 15
            periodSeconds: 10
values.yaml
replicaCount: 1
image:
  repository: localhost:5001/user-service
Can someone please help me with the issue?
In the Dockerfile,
RUN npm ci
had to be changed to
RUN npm ci --force
which resolved the issue for me.

Non Root User Helm & AKS

I'm attempting to connect to and run a pod in an AKS cluster (v1.19.6) as a non-root user with Helm (v3.5.2), and getting a crash loop with the error "I have no name!". The Docker image and service run locally without an issue as the correct user at runtime.
After helm create mychart I set up the security context in values.yaml as:
podSecurityContext:
  runAsNonRoot: true
  runAsUser: 123
securityContext:
  # capabilities:
  #   drop:
  #     - ALL
  # readOnlyRootFilesystem: false
  runAsNonRoot: true
  runAsUser: 123
The deployment.yaml is below. I've not modified anything else other than the parameters to connect to my AKS cluster:
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "mychart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
My Dockerfile ends with
USER 123
EXPOSE 8080
CMD [ "sh", "-c", "./blah; bash"]
Am I correct that this is most likely the issue? How do I go about resolving the problem? Supporting documentation would be very helpful; everything I'm finding is outdated.
I created a startup script to start the service with the user declared. Not sure if this is the K8s methodology but it worked. Will leave it unanswered in the event someone has a better solution.
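For what it's worth, the "I have no name!" prompt usually only means that the UID set by runAsUser has no entry in the image's /etc/passwd; creating a user with that exact UID in the Dockerfile is one way to avoid it. A sketch, assuming UID 123 as in the values above:
# Create a user whose UID matches runAsUser in the pod security context
RUN useradd --create-home --uid 123 appuser

USER 123
EXPOSE 8080
CMD [ "sh", "-c", "./blah; bash" ]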

Using single helm chart for deployment of multiple services

I am new to helm and kubernetes.
My current requirement is to set up multiple services using a common Helm chart.
Here is the scenario.
I have a common Docker image for all of the services.
For each of the services there is a different command to run. In total there are more than 40 services.
Example
pipenv run python serviceA.py
pipenv run python serviceB.py
pipenv run python serviceC.py
and so on...
Current state of helm chart I have is
demo-helm
|- Chart.yaml
|- templates
|- deployment.yaml
|- _helpers.tpl
|- values
|- values-serviceA.yaml
|- values-serviceB.yaml
|- values-serviceC.yaml
and so on ...
Now, since I want to use the same Helm chart to deploy multiple services, how should I do it?
I used the following command: helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml, but it only does a deployment for the values file provided at the end.
Here is my deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "helm.fullname" . }}
  labels:
    {{- include "helm.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "helm.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "helm.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: {{- toYaml .Values.command | nindent 12 }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
            - name: secrets
              mountPath: "/usr/src/app/config.ini"
              subPath: config.ini
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      volumes:
        - name: secrets
          secret:
            secretName: sample-application
            defaultMode: 0400
Update:
Since my requirement has changed to having all the values for the services in a single file, I am able to do it with the following.
deployment.yaml
{{- range $service, $val := .Values.services }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $service }}
  labels:
    app: {{ .nameOverride }}
spec:
  replicas: {{ .replicaCount }}
  selector:
    matchLabels:
      app: {{ .nameOverride }}
  template:
    metadata:
      labels:
        app: {{ .nameOverride }}
    spec:
      imagePullSecrets:
        - name: aws-ecr
      containers:
        - name: {{ $service }}
          image: "image-latest-v3"
          imagePullPolicy: IfNotPresent
          command: {{- toYaml .command | nindent 12 }}
          resources:
            {{- toYaml .resources | nindent 12 }}
          volumeMounts:
            - name: secrets
              mountPath: "/usr/src/app/config.ini"
              subPath: config.ini
      volumes:
        - name: secrets
          secret:
            secretName: {{ .secrets }}
            defaultMode: 0400
{{- end }}
and values.yaml
services:
  # Services for region1
  serviceA-region1:
    nameOverride: "serviceA-region1"
    fullnameOverride: "serviceA-region1"
    command: ["bash", "-c", "python serviceAregion1.py"]
    secrets: vader-search-region2
    resources: {}
    replicaCount: 5
  # Services for region2
  serviceA-region2:
    nameOverride: "serviceA-region2"
    fullnameOverride: "serviceA-region2"
    command: ["bash", "-c", "python serviceAregion2.py"]
    secrets: vader-search-region2
    resources: {}
    replicaCount: 5
Now I want to know whether the following configuration will work with the changes I am posting below, for both values.yaml
services:
  region:
    # Services for region1
    serviceA-region1:
      nameOverride: "serviceA-region1"
      fullnameOverride: "serviceA-region1"
      command: ["bash", "-c", "python serviceAregion1.py"]
      secrets: vader-search-region2
      resources: {}
      replicaCount: 5
  region:2
    # Services for region2
    serviceA-region2:
      nameOverride: "serviceA-region2"
      fullnameOverride: "serviceA-region2"
      command: ["bash", "-c", "python serviceAregion2.py"]
      secrets: vader-search-region2
      resources: {}
      replicaCount: 5
and deployment.yaml
{{- range $region, $val := .Values.services.region }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $region }}-{{ .nameOverride }}
  labels:
    app: {{ .nameOverride }}
spec:
  replicas: {{ .replicaCount }}
  selector:
    matchLabels:
      app: {{ .nameOverride }}
  template:
    metadata:
      labels:
        app: {{ .nameOverride }}
    spec:
      imagePullSecrets:
        - name: aws-ecr
      containers:
        - name: {{ $region }}-{{ .nameOverride }}
          image: "image-latest-v3"
          imagePullPolicy: IfNotPresent
          command: {{- toYaml .command | nindent 12 }}
          resources:
            {{- toYaml .resources | nindent 12 }}
          volumeMounts:
            - name: secrets
              mountPath: "/usr/src/app/config.ini"
              subPath: config.ini
      volumes:
        - name: secrets
          secret:
            secretName: {{ .secrets }}
            defaultMode: 0400
{{- end }}
I can recommend trying a helmfile-based approach; I prefer a three-file layout.
What you'll need:
helmfile-init.yaml: contains YAML instructions that you might need to use for creating and configuring namespaces etc.
helmfile-backend.yaml: contains all the releases you need to deploy (service1, service2 ...)
helmfile.yaml: paths to the above-mentioned (helmfile-init, helmfile-backend YAML files)
a deployment spec file (app_name.json): a specification file that contains all the information regarding the release (release-name, namespace, helm chart version, application-version, etc.)
Helmfile has made my life quite a bit easier when deploying multiple applications; a minimal sketch follows below.
You can also refer to the official Helmfile docs or the Blue Books if you have GitHub access on your machine.
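A minimal helmfile.yaml along those lines might look like this; a sketch, where the chart path, release names and values files are assumptions based on the layout above:
releases:
  - name: demo-helm-service-a
    namespace: default
    chart: ./demo-helm
    values:
      - values/values-serviceA.yaml
  - name: demo-helm-service-b
    namespace: default
    chart: ./demo-helm
    values:
      - values/values-serviceB.yaml
Running helmfile apply (or helmfile sync) then installs or upgrades every release in one go.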
helm install demo-helm . -f values/values-serviceA.yaml -f values-serviceB.yaml
When you do it like this, the serviceB values override the serviceA values. You need to run the command separately with a different release name for each service, as follows:
helm install demo-helm-A . -f values/values-serviceA.yaml
helm install demo-helm-B . -f values/values-serviceB.yaml
Is there any other approach, like running everything in a loop, since the only difference in each of my values.yaml files is the command section? So I could include the commands in the same file like this:
command:
  - ["bash", "-c", "python serviceA.py"]
  - ["bash", "-c", "python serviceB.py"]
  - ["bash", "-c", "python serviceC.py"]
– whoami
Yes, you can write a fairly simple bash script which will run everything in a loop:
for i in {A..Z}; do sed "s/{{COMMAND}}/[\"bash\", \"-c\", \"python service$i.py\"]/g" values/values-service-template.yaml | helm install demo-helm-$i . -f - ; done
Instead of command: ["bash", "-c", "python serviceAregion1.py"] in your values/values-service-template.yaml file just put command: {{COMMAND}} as it will be substituted with the exact command with every iteration of the loop.
As to {A..Z} put there whatever you need in your case. It might be {A..K} if you only have services named from A to K or {1..40} if instead of letters you prefer numeric values.
The following sed command will substitute {{COMMAND}} fragment in your original values/values-service-template.yaml with the actual command e.g. ["bash", "-c", "python serviceA.py"], ["bash", "-c", "python serviceB.py"] and so on.
sed "s/{{COMMAND}}/[\"bash\", \"-c\", \"python service$i.py\"]/g" values/values-service-template.yaml
Then it will be piped ( | symbol ) to:
helm install demo-helm-$i . -f -
where demo-helm-$i will be expanded to e.g. demo-helm-A, but the key element here is the - character, which means: read from standard input instead of from a file, which is what the -f flag normally expects.
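For completeness, the relevant part of values/values-service-template.yaml would then look roughly like this (a sketch; only the command line differs per service, and {{COMMAND}} is the placeholder replaced by sed before Helm ever sees the file):
replicaCount: 1
image:
  repository: <your common image>
  pullPolicy: IfNotPresent
command: {{COMMAND}}
resources: {}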

acumos AI clio installation fails with "error converting YAML to JSON"

I have been trying to install the Clio release.
VM :
ubuntu 18.04
16 Cores
32 GB RAM
500 GB Storage.
Command :
bash /home/ubuntu/system-integration/tools/aio_k8s_deployer/aio_k8s_deployer.sh all acai-server ubuntu generic
Almost all steps of the installation completed successfully, but during "setup-lum" I got the below error.
Error:
YAML parse error on lum-helm/templates/deployment.yaml:
error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context
Workaround:
I was able to get past this error (tested via helm install --dry-run) by
a. removing the resources, affinity and tolerations blocks
b. replacing "Release.Name" with the actual release value (e.g. license-clio-configmap)
but when I run the full installation command, those Helm charts get overwritten again.
Full error :
...
helm install -f kubernetes/values.yaml --name license-clio --namespace default --debug ./kubernetes/license-usage-manager/lum-helm
[debug] Created tunnel using local port: '46109'
[debug] SERVER: "127.0.0.1:46109"
[debug] Original chart version: ""
[debug] CHART PATH: /deploy/system-integration/AIO/lum/kubernetes/license-usage-manager/lum-helm
YAML parse error on lum-helm/templates/deployment.yaml: error converting YAML to JSON: yaml: line 36: mapping values are not allowed in this context
Yaml of deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "lum-helm.fullname" . }}
  labels:
    app: {{ template "lum-helm.name" . }}
    chart: {{ template "lum-helm.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "lum-helm.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "lum-helm.name" . }}
        release: {{ .Release.Name }}
    spec:
      initContainers:
        - name: wait-for-db
          image: busybox:1.28
          command:
            - 'sh'
            - '-c'
            - >
              until nc -z -w 2 {{ .Release.Name }}-postgresql {{ .Values.postgresql.servicePort }} && echo postgresql ok;
              do sleep 2;
              done
      containers:
        - name: {{ .Chart.Name }}
          image: nexus3.acumos.org:10002/acumos/lum-server:default
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ .Release.Name }}-postgresql
                  key: postgresql-password
            - name: NODE
          volumeMounts:
            - name: config-volume
              mountPath: /opt/app/lum/etc/config.json
              subPath: lum-config.json
          ports:
            - name: http
              containerPort: 2080
              protocol: TCP
          livenessProbe:
            httpGet:
              path: '/api/healthcheck'
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: '/api/healthcheck'
              port: http
            initialDelaySeconds: 60
            periodSeconds: 10
            failureThreshold: 10
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
      volumes:
        - name: config-volume
          configMap:
            name: {{ .Release.Name }}-configmap
This error was resolved as per Error trying to install Acumos Clio using AIO
I provided an image tag of 1.3.2 in my actual value.yaml and the lum deployment was successful.
In the Acumos setup there are two copies of setup-lum.sh and values.yaml:
the actual one:
~/system-integration/AIO/lum/kubernetes/value.yaml
and the runtime copy:
~/aio_k8s_deployer/deploy/system-integration/AIO/lum/kubernetes/value.yaml
I found this workaround:
I uncommented the IMAGE-TAG line in the values.yaml file
and commented out the following lines in the setup-lum.sh file (these were already executed at the first run, and in this way I skipped the overwriting problem):
rm -frd kubernetes/license-usage-manager
git clone "https://gerrit.acumos.org/r/license-usage-manager" \
kubernetes/license-usage-manager