Helm: exporting environment variables from an env file located on a volumeMount - Kubernetes

I have a problem: I am trying to set env vars from a file located on a volumeMount (an initContainer pulls the file from a remote location and puts it on the path /mnt/volume/ - a custom solution) during container start, but it doesn't work.
- name: {{ .Chart.Name }}
  securityContext:
    {{- toYaml .Values.securityContext | nindent 12 }}
  image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
  imagePullPolicy: {{ .Values.image.pullPolicy }}
  command:
    - /bin/sh
    - -c
    - "echo 'export $(cat /mnt/volume/.env | xargs) && java -jar application/target/application-0.0.1-SNAPSHOT.jar"
This doesn't work, so then I tried a simple export command:
command:
  - /bin/sh
  - -c
  - "export ENV_TMP=tmp && java -jar application/target/application-0.0.1-SNAPSHOT.jar"
That doesn't work either. I also tried appending to the .bashrc file, but it still doesn't work.
I am not sure how to handle this.
Thanks
EDIT: typo

By default there is no shared storage between a pod's containers, so the first step is to check that the same volume is mounted in both containers; one of the simplest solutions is an emptyDir.
Then you can write a variable in the init container and read it from the main container:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  initContainers:
    - name: init-container
      image: alpine
      command:
        - /bin/sh
        - -c
        - 'echo "test_value" > /mnt/volume/var.txt'
      volumeMounts:
        - mountPath: /mnt/volume
          name: shared-storage
  containers:
    - image: alpine
      name: test-container
      command:
        - /bin/sh
        - -c
        - 'READ_VAR=$(cat /mnt/volume/var.txt) && echo "main_container: ${READ_VAR}"'
      volumeMounts:
        - mountPath: /mnt/volume
          name: shared-storage
  volumes:
    - name: shared-storage
      emptyDir: {}
Here is the log:
$ kubectl logs test-pd
main_container: test_value
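Applied to the original question, the same pattern could look roughly like this (a sketch, assuming the init container drops a plain KEY=value .env file onto the shared emptyDir; note the balanced quoting around the export that was missing in the original command, and the image name is just a placeholder):
containers:
  - name: app
    image: eclipse-temurin:17-jre   # assumption: any image with a JRE and /bin/sh
    command:
      - /bin/sh
      - -c
      # export every KEY=value pair from the shared .env file, then start the app;
      # this simple form assumes the values contain no spaces
      - "export $(cat /mnt/volume/.env | xargs) && java -jar application/target/application-0.0.1-SNAPSHOT.jar"
    volumeMounts:
      - mountPath: /mnt/volume
        name: shared-storage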

I have checked this, and in the end I found a typo.
Anyway, thanks for the suggestions.

Related

Secret mount on Kubernetes container fails with security context (run as non-root user) added

I am sharing the part of the manifest where I added the security context. If I remove the security context, it works fine. I am trying to use a non-root user to bring up the container. I am not sure what I did wrong below:
containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    securityContext:
      runAsUser: 2000
      allowPrivilegeEscalation: false
    ports:
      - name: http
        containerPort: 8010
        protocol: TCP
    volumeMounts:
      - name: mount-jmx-secret
        mountPath: "etc/hello-world"
volumes:
  - name: mount-jmx-secret
    secret:
      secretName: jmxsecret
      defaultMode: 0600
I do not know what mistake I made; it worked fine after a couple of reinstalls of the Helm chart.
The change I made was adding a new user to the Dockerfile:
RUN useradd -u 8877 ram
USER ram
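For what it's worth, you may also want to keep the chart's securityContext consistent with the UID created in the image. A minimal sketch (the 8877 value is taken from the useradd above and is otherwise an assumption; adjust it to whatever user your image actually runs as):
securityContext:
  runAsUser: 8877                 # match the UID created by useradd -u 8877
  allowPrivilegeEscalation: false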

Kubernetes copying jars into a pod and restart

I have a Kubernetes problem where I need to copy 2 jars (each jar > 1 MB) into a pod after it is deployed. Since a ConfigMap cannot hold files larger than 1 MB, the idea is to use wget in an initContainer to download the jars.
Below is my Kubernetes template configuration, which I have modified. The original is available at https://github.com/dremio/dremio-cloud-tools/blob/master/charts/dremio/templates/dremio-executor.yaml
{{ if not .Values.DremioAdmin }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dremio-executor
spec:
  serviceName: "dremio-cluster-pod"
  replicas: {{.Values.executor.count}}
  podManagementPolicy: "Parallel"
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: dremio-executor
  template:
    metadata:
      labels:
        app: dremio-executor
        role: dremio-cluster-pod
      annotations:
        dremio-configmap/checksum: {{ (.Files.Glob "config/*").AsConfig | sha256sum }}
    spec:
      terminationGracePeriodSeconds: 5
      {{- if .Values.nodeSelector }}
      nodeSelector:
        {{- range $key, $value := .Values.nodeSelector }}
        {{ $key }}: {{ $value }}
        {{- end }}
      {{- end }}
      containers:
        - name: dremio-executor
          image: {{.Values.image}}:{{.Values.imageTag}}
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
          resources:
            requests:
              memory: {{.Values.executor.memory}}M
              cpu: {{.Values.executor.cpu}}
          volumeMounts:
            - name: dremio-executor-volume
              mountPath: /opt/dremio/data
            ##################### START added this section #####################
            - name: dremio-connector
              mountPath: /opt/dremio/jars
            #################### END added this section ##########################
            - name: dremio-config
              mountPath: /opt/dremio/conf
          env:
            - name: DREMIO_MAX_HEAP_MEMORY_SIZE_MB
              value: "{{ template "HeapMemory" .Values.executor.memory }}"
            - name: DREMIO_MAX_DIRECT_MEMORY_SIZE_MB
              value: "{{ template "DirectMemory" .Values.executor.memory }}"
            - name: DREMIO_JAVA_EXTRA_OPTS
              value: >-
                -Dzookeeper=zk-hs:2181
                -Dservices.coordinator.enabled=false
                {{- if .Values.extraStartParams }}
                {{ .Values.extraStartParams }}
                {{- end }}
          command: ["/opt/dremio/bin/dremio"]
          args:
            - "start-fg"
          ports:
            - containerPort: 45678
              name: server
      initContainers:
        ################ START added this section ######################
        - name: installjars
          image: {{.Values.image}}:{{.Values.imageTag}}
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: dremio-connector
              mountPath: /opt/dremio/jars
          command: ["/bin/sh","-c"]
          args: ["wget --no-check-certificate -O /dir/connector.jar https://<some nexus repo URL>/connector.jar; sleep 10;"]
        ################ END added this section ###############
        - name: wait-for-zk
          image: busybox
          command: ["sh", "-c", "until ping -c 1 -W 1 zk-hs > /dev/null; do echo waiting for zookeeper host; sleep 2; done;"]
        # since we're mounting a separate volume, reset permission to
        # dremio uid/gid
        - name: chown-data-directory
          image: {{.Values.image}}:{{.Values.imageTag}}
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: dremio-executor-volume
              mountPath: /opt/dremio/data
          command: ["chown"]
          args:
            - "dremio:dremio"
            - "/opt/dremio/data"
      volumes:
        - name: dremio-config
          configMap:
            name: dremio-config
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
        - name: {{ .Values.imagePullSecrets }}
      {{- end}}
        #################### START added this section ########################
        - name: dremio-connector
          emptyDir: {}
        #################### END added this section ########################
  volumeClaimTemplates:
    - metadata:
        name: dremio-executor-volume
      spec:
        accessModes: [ "ReadWriteOnce" ]
        {{- if .Values.storageClass }}
        storageClassName: {{ .Values.storageClass }}
        {{- end }}
        resources:
          requests:
            storage: {{.Values.executor.volumeSize}}
{{ end }}
The above is NOT working, and I don't see any jars downloaded once I exec into the pod. I don't understand what is wrong with it. Note, however, that if I run the same wget command inside the pod, it does download the jar, which baffles me. So the URL works and reading/writing the directory is not a problem, but the jar is still not downloaded.
If you can remove the need for wget altogether, it would make life easier...
Option 1
Using your own Docker image will save some pain, if that's an option.
Dockerfile
# docker build -f Dockerfile -t ghcr.io/yourOrg/projectId/dockerImageName:0.0.1 .
# docker push ghcr.io/yourOrg/projectId/dockerImageName:0.0.1
FROM nginx:1.19.10-alpine
# Use local copies of config
COPY files/some1.jar /dir/
COPY files/some2.jar /dir/
The files will be ready in the container, with no need for cryptic commands in your pod definition. Alternatively, if you need to download the files, you could copy a script into the Docker image to do that work and run it on startup via the Docker CMD directive; a sketch of that follows below.
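A rough sketch of that alternative, assuming a hypothetical files/download-jars.sh helper (the script name, base image, and target directory are placeholders, not part of the original answer):
FROM alpine:3.18
RUN apk add --no-cache wget
# hypothetical helper that downloads the jars into /dir before starting the real workload
COPY files/download-jars.sh /usr/local/bin/download-jars.sh
RUN chmod +x /usr/local/bin/download-jars.sh
CMD ["/usr/local/bin/download-jars.sh"]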
Option 2
Alternatively, you could do a two-stage deployment (a rough sketch follows after this list):
- Create a persistent volume.
- Mount the volume to a pod (using busybox as a base?) that will run long enough for the files to copy across from your local machine (or to be downloaded, if you continue to use wget).
- kubectl cp the files you need to the (Retained) PersistentVolume.
- Now mount the PV to your pod's container(s) so the files are readily available when the pod fires up.
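A minimal sketch of that flow, assuming a hypothetical PVC named jars-pv-claim and a helper pod named jar-loader (both names are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jars-pv-claim              # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: jar-loader                 # hypothetical helper pod
spec:
  containers:
    - name: loader
      image: busybox
      command: ["sh", "-c", "sleep 3600"]   # stay up long enough to copy the files in
      volumeMounts:
        - name: jars
          mountPath: /jars
  volumes:
    - name: jars
      persistentVolumeClaim:
        claimName: jars-pv-claim
Then, from your machine, run kubectl cp connector.jar jar-loader:/jars/connector.jar, and finally mount the same claim into the executor container so the jars are in place when it starts.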
Your approach seems right.
Another solution could be to include the jar in the Docker image, but I think that's not possible, right?
You could just use an emptyDir instead of a VolumeClaim.
Lastly, I would download the jar before waiting for ZooKeeper, to save some time.

Helm lifecycle commands in deployment

I have a deployment.yaml template.
I have AWS EKS 1.8 and a matching kubectl.
I'm using Helm 3.3.4.
When I deploy the same template directly through kubectl apply -f deployment.yaml, everything is good: the init containers and the main container in the pod work fine.
But if I try to start the deployment through Helm, I get this error:
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "process_linux.go:101: executing setns process caused \"exit status 1\"": unknown\r\n"
(from kubectl describe pods osad-apidoc-6b74c9bcf9-tjnrh)
It looks like I have missed something in the annotations, or I'm using the wrong syntax in the command description.
Some unimportant parameters are omitted in this example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "apidoc.fullname" . }}
  labels:
    {{- include "apidoc.labels" . | nindent 4 }}
spec:
  template:
    metadata:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
        lifecycle:
          postStart:
            exec:
              command:
                - /bin/sh
                - -c
                - |
                  echo 'ServerName 127.0.0.1' >> /etc/apache2/apache2.conf
                  a2enmod rewrite
                  a2enmod headers
                  a2enmod ssl
                  a2dissite default
                  a2ensite 000-default
                  a2enmod rewrite
                  service apache2 start
I have tried invoking a simple command like:
lifecycle:
  postStart:
    exec:
      command: ['/bin/sh', '-c', 'printenv']
Unfortunately, I got the same error.
If I delete this lifecycle block, everything works fine through Helm.
But I need to invoke these commands; I can't omit these steps.
I also checked the Helm template with helm lint, and it looks good.
Original deployment.yaml before moving to Helm:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apidoc
  namespace: apidoc
  labels:
    app: apidoc
    stage: dev
    version: v-1
spec:
  selector:
    matchLabels:
      app: apidoc
      stage: dev
  replicas: 1
  template:
    metadata:
      labels:
        app: apidoc
        stage: dev
        version: v-1
    spec:
      initContainers:
        - name: git-clone
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/helper:latest'
          volumeMounts:
            - name: repos
              mountPath: /var/repos
          workingDir: /var/repos
          command:
            - sh
            - '-c'
            - >-
              git clone --single-branch --branch k8s
              git@github.com:examplerepo/apidoc.git -qv
        - name: copy-data
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/helper:latest'
          volumeMounts:
            - name: web-data
              mountPath: /var/www/app
            - name: repos
              mountPath: /var/repos
          workingDir: /var/repos
          command:
            - sh
            - '-c'
            - >-
              if cp -r apidoc/* /var/www/app/; then echo 'Success!!!' && exit 0;
              else echo 'Failed !!!' && exit 1;fi;
      containers:
        - name: apache2
          image: '123123123123.dkr.ecr.us-east-1.amazonaws.com/apache2:2.2'
          tty: true
          volumeMounts:
            - name: web-data
              mountPath: /var/www/app
            - name: configfiles
              mountPath: /etc/apache2/sites-available/000-default.conf
              subPath: 000-default.conf
          ports:
            - name: http
              protocol: TCP
              containerPort: 80
          lifecycle:
            postStart:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - |
                    echo 'ServerName 127.0.0.1' >> /etc/apache2/apache2.conf
                    a2enmod rewrite
                    a2enmod headers
                    a2enmod ssl
                    a2dissite default
                    a2ensite 000-default
                    a2enmod rewrite
                    service apache2 start
      volumes:
        - name: web-data
          emptyDir: {}
        - name: repos
          emptyDir: {}
        - name: configfiles
          configMap:
            name: apidoc-config

How can I check whether K8s volume was mounted correctly?

I'm testing whether I can mount data from S3 using an initContainer. What I intended and expected was the same volume being mounted in both the initContainer and the Container: data from S3 gets downloaded by the initContainer to the mountPath /s3-data, and since the Container runs after the initContainer, it can read from the path where the volume was mounted.
However, the Container doesn't show me any logs and just says 'stream closed'. The initContainer's logs show that the data was successfully downloaded from S3.
What am I doing wrong? Thanks in advance.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-job
spec:
  template:
    spec:
      initContainers:
        - name: data-download
          image: <My AWS-CLI Image>
          command: ["/bin/sh", "-c"]
          args:
            - aws s3 cp s3://<Kubeflow Bucket>/kubeflowdata.tar.gz /s3-data
          volumeMounts:
            - mountPath: /s3-data
              name: s3-data
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef: {key: AWS_ACCESS_KEY_ID, name: aws-secret}
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef: {key: AWS_SECRET_ACCESS_KEY, name: aws-secret}
      containers:
        - name: check-proper-data-mount
          image: <My Image>
          command: ["/bin/sh", "-c"]
          args:
            - cd /s3-data
            - echo "Just s3-data dir"
            - ls
            - echo "After making a sample file"
            - touch sample.txt
            - ls
          volumeMounts:
            - mountPath: /s3-data
              name: s3-data
      volumes:
        - name: s3-data
          emptyDir: {}
      restartPolicy: OnFailure
  backoffLimit: 6
You can try the args part like below: with /bin/sh -c only the first args entry is treated as the script (the rest become positional parameters), so the commands need to be joined into a single string.
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    purpose: demonstrate-command
  name: command-demo
spec:
  containers:
    -
      args:
        - cd /s3-data;
          echo "Just s3-data dir";
          ls;
          echo "After making a sample file";
          touch sample.txt;
          ls;
      command:
        - /bin/sh
        - -c
      image: "<My Image>"
      name: containername
For reference:
How to set multiple commands in one yaml file with Kubernetes?
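Once the commands are joined into a single argument, you can also check the mount directly while the container is still running (a completed Job pod can't be exec'd into; the pod name is whatever the Job created):
# list the volumes and mount points Kubernetes set up for the pod
kubectl describe pod <job-pod-name>
# look at what the init container downloaded into the shared emptyDir
kubectl exec <job-pod-name> -c check-proper-data-mount -- ls -l /s3-data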

Initcontainer not initializing in kubernetes

I'm trying to retrieve some code from GitLab in my YAML.
Unfortunately, the job fails to initialize the pod. I have checked all the logs and it fails with the following message:
container "filter-human-genome-and-create-mapping-stats" in pod "create-git-container-5lhp2" is waiting to start: PodInitializing
Here is the yaml file:
apiVersion: batch/v1
kind: Job
metadata:
  name: create-git-container
  namespace: diag
spec:
  template:
    spec:
      initContainers:
        - name: bash-script-git-downloader
          image: alpine/git
          volumeMounts:
            - mountPath: /bash_scripts
              name: bash-script-source
          command: ["/bin/sh","-c"]
          args: ["git", "clone", "https://.......#gitlab.com/scripts.git" ]
      containers:
        - name: filter-human-genome-and-create-mapping-stats
          image: myimage
          env:
            - name: OUTPUT
              value: "/output"
          command: ["ls"]
          volumeMounts:
            - mountPath: /bash_scripts
              name: bash-script-source
            - mountPath: /output
              name: output
      volumes:
        - name: bash-script-source
          emptyDir: {}
        - name: output
          persistentVolumeClaim:
            claimName: output
      restartPolicy: Never
If you use sh -c (or bash -c), it expects the script as a single argument, so you have to pass your args as one argument. There are several ways to do it:
command: ["/bin/sh","-c"]
args: ["git clone https://.......#gitlab.com/scripts.git"]
or
command: ["/bin/sh","-c", "git clone https://.......#gitlab.com/scripts.git"]
or
args: ["/bin/sh","-c", "git clone https://.......#gitlab.com/scripts.git"]
or
command:
  - /bin/sh
  - -c
  - |
    git clone https://.......#gitlab.com/scripts.git
or
args:
  - /bin/sh
  - -c
  - |
    git clone https://.......#gitlab.com/scripts.git