can i use a configmap created from an init container in the pod - kubernetes

I am trying to "pass" a value from the init container to a container. Since values in a configmap are shared across the namespace, I figured I can use it for this purpose. Here is my job.yaml (with faked-out info):
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['kubectl', 'create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url']
      restartPolicy: Never
  backoffLimit: 0
This does not seem to work (EDIT: although the statements following this edit note may still be correct, this is not working because kubectl is not a recognizable command in the busybox image), and I am assuming that the pod can only read values from a configmap created BEFORE the pod is created. Has anyone else come across the difficulty of passing values between containers, and what did you do to solve this?
Should I deploy the configmap in another pod and wait to deploy this one until the configmap exists?
(I know I can write files to a volume, but I'd rather not go that route unless it's absolutely necessary, since it essentially means our docker images must be coupled to an environment where some specific files exist)

You can create an emptyDir volume and mount it into both containers. Unlike a persistent volume, an emptyDir volume has no portability issues.
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      initContainers:
      - name: artifactory-snapshot
        image: busybox
        command: ['/bin/sh', '-c', 'cp x /tmp/artifact/x']
        volumeMounts:
        - name: tmp
          mountPath: /tmp/artifact
      restartPolicy: Never
      volumes:
      - name: tmp
        emptyDir: {}
  backoffLimit: 0
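To pass a single value (such as the snapshot URL) through that shared volume, a minimal sketch of the relevant fragment of spec.template.spec could look like this; the file name /tmp/artifact/artifactoryUrl and the /entrypoint.sh wrapper are only illustrative, not part of the original image:
initContainers:
- name: artifactory-snapshot
  image: busybox
  # write the value into the shared emptyDir volume
  command: ['/bin/sh', '-c', 'echo -n "http://artifactory.com/some/url" > /tmp/artifact/artifactoryUrl']
  volumeMounts:
  - name: tmp
    mountPath: /tmp/artifact
containers:
- name: installer-test
  image: installer-test:latest
  # read the value at startup before handing off to the image's real entrypoint (path is hypothetical)
  command: ['/bin/sh', '-c', 'export in_artifactoryUrl="$(cat /tmp/artifact/artifactoryUrl)" && exec /entrypoint.sh']
  volumeMounts:
  - name: tmp
    mountPath: /tmp/artifact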

If for various reasons you don't want to use a shared volume, and you want to create a ConfigMap or a Secret instead, here is a solution.
First, you need to use a Docker image which contains kubectl, for example gcr.io/cloud-builders/kubectl:latest (a Docker image containing kubectl, maintained by Google).
Then this (init) container needs enough rights to create resources on the Kubernetes cluster. By default, Kubernetes injects the token of the service account named "default" into the container, but I prefer to make this explicit, so add this line:
...
initContainers:
- # Already true by default, but if you rely on it, prefer to make it explicit
  automountServiceAccountToken: true
  name: artifactory-snapshot
And bind the "edit" role to the "default" service account:
kubectl create rolebinding default-edit-rb --clusterrole=edit --serviceaccount=default:default --namespace=default
Then the complete example:
apiVersion: batch/v1
kind: Job
metadata:
  name: installer-test
spec:
  template:
    spec:
      initContainers:
      - # Already true by default, but if you rely on it, prefer to make it explicit.
        automountServiceAccountToken: true
        name: artifactory-snapshot
        # You need to use a docker image which contains kubectl
        image: gcr.io/cloud-builders/kubectl:latest
        command:
        - sh
        - -c
        # the "--dry-run -o yaml | kubectl apply -f -" is there to make the command idempotent
        - kubectl create configmap test-config --from-literal=artifactorySnapshotUrl=http://artifactory.com/some/url --dry-run -o yaml | kubectl apply -f -
      containers:
      - name: installer-test
        image: installer-test:latest
        env:
        - name: clusterId
          value: "some_cluster_id"
        - name: in_artifactoryUrl
          valueFrom:
            configMapKeyRef:
              name: test-config
              key: artifactorySnapshotUrl

First of all, kubectl is a binary: it was downloaded onto your machine before you could use the command. In your pod, however, the kubectl binary doesn't exist, so you can't run the kubectl command from a busybox image.
Furthermore, kubectl uses credentials saved on your machine (probably under the ~/.kube path), so even if the binary were present, it would fail from inside the container because of missing credentials.
For your scenario, I suggest the same as @ccshih: use volume sharing.
Here is the official doc about volume sharing between an init container and a container.
The YAML used there is:
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}
Here the init container saves a file in the volume, and later the file is available inside the main container. Try the tutorial yourself for a better understanding.

Related

Kubernetes: Is it possible to add an existing deployment as a sidecar to another deployment?

I have a Helm chart that I was going to deploy, and I would like to use the deployment it creates as a sidecar for another deployment. Is this possible using Rancher's GUI, or is it something that I can directly configure in the YAML?
TL;DR: No
Not really possible. You have to specify multiple containers in the same pod/deployment manifest to create sidecars, like this:
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: sidecar-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
Alternatively, you can achieve this by using Admission Controllers, the same way Istio does, but this is way outside of the scope of this question.

Mount / copy a file from host to Pod in kubernetes using minikube

I'm writing a kubectl configuration to start an image and copy a file to the container.
I need the file Config.yaml in /, so /Config.yaml needs to be a valid file.
I need that file in the Pod before it starts, so kubectl cp does not work.
I have the Config2.yaml in my local folder, and I'm starting the pod like:
kubectl apply -f pod.yml
Here follows my pod.yml file.
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
  - name: python
    image: mypython
    volumeMounts:
    - name: config
      mountPath: /Config.yaml
  volumes:
  - name: config
    hostPath:
      path: Config2.yaml
      type: File
If I try to use it like this, it also fails:
- name: config-yaml
  mountPath: /
  subPath: Config.yaml
  #readOnly: true
If you just need the information contained in the config.yaml to be present in the pod from the time it is created, use a configMap instead.
Create a configMap that contains all the data stored in the config.yaml and mount that into the correct path in the pod. This would not work for read/write, but it works wonderfully for read-only data.
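A minimal sketch of that approach, assuming the local file is Config2.yaml and using an illustrative ConfigMap name (python-config):
kubectl create configmap python-config --from-file=Config.yaml=Config2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
  - name: python
    image: mypython
    volumeMounts:
    - name: config
      mountPath: /Config.yaml
      # subPath mounts only this key as a single file at /Config.yaml
      subPath: Config.yaml
  volumes:
  - name: config
    configMap:
      name: python-config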
You can try a postStart lifecycle handler here to validate the file when the pod starts.
Please refer here
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
    volumeMounts:
    - mountPath: /config.yaml
      name: config
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "apt update && apt install yamllint -y && yamllint /config.yaml"]
  volumes:
  - name: config
    hostPath:
      path: /tmp/config.yaml
      type: File
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
If config.yaml is invalid, the pod won't start.

Write to Secret file in pod

I define a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  config.yaml: |-
    apiUrl: "https://my.api.com/api/v1"
    username: Administrator
    password: NewPasswdTest11
And then I create the volume mount in a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-webapp-test
  labels:
    name: k8s-webapp-test
    version: 1.0.4
spec:
  replicas: 2
  selector:
    matchLabels:
      name: k8s-webapp-test
      version: 1.0.4
  template:
    metadata:
      labels:
        name: k8s-webapp-test
        version: 1.0.4
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      volumes:
      - name: secret-volume
        secret:
          secretName: string-data-secret
      containers:
      - name: k8s-webapp-test
        image: dockerstore/k8s-webapp-test:1.0.4
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-volume
          mountPath: "/secrets"
          readOnly: false
So, after the deployment, I have 2 pods with volume mounts at C:\secrets (I do use Windows nodes). When I try to edit config.yaml, which is located in the C:\secrets folder, I get the following error:
Access to the path 'c:\secrets\config.yaml' is denied.
Although I marked the mount with readOnly: false, I cannot write to the file. How can I modify it?
As you can see here, it is not possible by design:
Secret, configMap, downwardAPI and projected volumes will be mounted as read-only volumes. Applications that attempt to write to these volumes will receive read-only filesystem errors. Previously, applications were allowed to make changes to these volumes, but those changes were reverted at an arbitrary interval by the system. Applications should be re-configured to write derived files to another location
You can look into using an init container which mounts the secret and then copies it to the desired location, where you might be able to modify it.
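A minimal sketch of that idea (volume and mount names are only illustrative; on Windows nodes you would need a Windows-based image instead of busybox):
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: string-data-secret
  - name: writable-secrets
    emptyDir: {}
  initContainers:
  - name: copy-secrets
    image: busybox
    # copy the read-only secret files into a writable emptyDir volume
    command: ["sh", "-c", "cp /secrets-ro/* /secrets/"]
    volumeMounts:
    - name: secret-volume
      mountPath: /secrets-ro
      readOnly: true
    - name: writable-secrets
      mountPath: /secrets
  containers:
  - name: k8s-webapp-test
    image: dockerstore/k8s-webapp-test:1.0.4
    volumeMounts:
    - name: writable-secrets
      mountPath: /secrets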
As an alternative to the init container, you might also use a container lifecycle hook, i.e. a postStart hook, which executes immediately after a container is created.
lifecycle:
  postStart:
    exec:
      command:
      - "/bin/sh"
      - "-c"
      - >
        cp -r /secrets ~/secrets;
You can create secrets from within a Pod but it seems you need to utilize the Kubernetes REST API to do so:
https://kubernetes.io/docs/concepts/overview/kubernetes-api/
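A rough sketch of such a call from inside a pod, using the mounted service account token (the service account needs RBAC permission to create secrets, and the secret name new-secret is only illustrative):
# read the token that Kubernetes mounts into the pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# POST a Secret object to the API server
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -X POST https://kubernetes.default.svc/api/v1/namespaces/default/secrets \
  -d '{"apiVersion":"v1","kind":"Secret","metadata":{"name":"new-secret"},"stringData":{"config.yaml":"apiUrl: https://my.api.com/api/v1"}}'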

How to allow a Kubernetes Job access to a file on host

I've been through the Kubernetes documentation thoroughly but am still having problems interacting with a file on the host filesystem from an application running inside a pod launched by a Kubernetes Job. This happens with even the simplest utility, so I have included a stripped-down example of my YAML config. The local file, 'hello.txt', referenced here does exist in /tmp on the host (i.e. outside the Kubernetes environment) and I have even chmod 777'd it. I've also tried places in the host's filesystem other than /tmp.
The pod that is launched by the Kubernetes Job terminates with Status=Error and generates the log ls: /testing/hello.txt: No such file or directory
Because I ultimately want to use this programmatically as part of a much more sophisticated workflow it really needs to be a Job not a Deployment. I hope that is possible. My current config file which I am launching with kubectl just for testing is:
apiVersion: batch/v1
kind: Job
metadata:
  name: kio
  namespace: kmlflow
spec:
  # ttlSecondsAfterFinished: 5
  template:
    spec:
      containers:
      - name: kio-ingester
        image: busybox
        volumeMounts:
        - name: test-volume
          mountPath: /testing
        imagePullPolicy: IfNotPresent
        command: ["ls"]
        args: ["-l", "/testing/hello.txt"]
      volumes:
      - name: test-volume
        hostPath:
          # directory location on host
          path: /tmp
          # this field is optional
          # type: Directory
      restartPolicy: Never
  backoffLimit: 4
Thanks in advance for any assistance.
It looks like when the volume is mounted, the existing data can't be accessed.
You will need to make use of an init container to pre-populate the data in the volume.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: busybox
    # a shell is needed so that the redirection into /data/config actually works
    command:
    - sh
    - -c
    - echo -n "{'address':'10.0.1.192:2379/db'}" > /data/config
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
Reference:
https://medium.com/@jmarhee/using-initcontainers-to-pre-populate-volume-data-in-kubernetes-99f628cd4519

Running a shell script to initialize pods in kubernetes (initializing my db cluster)

I need to be able to run a shell script (my script is for initializing my db cluster) to initialize my pods in Kubernetes.
I don't want to create my script inside my Dockerfile, because I get my image directly from the web and I don't want to touch it.
So I want to know if there is a way to get my script into one of my volumes so I can execute it like this:
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["./init.sh"]
  restartPolicy: OnFailure
It depends on what exactly your init script does, but init containers should be helpful in such cases. Init containers are run before the main application container is started and can do some preparation work, such as creating configuration files.
You would still need your own Docker image, but it doesn't have to be the same image as the database one.
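A minimal sketch of that idea, assuming a hypothetical helper image my-init-scripts:latest that ships init.sh (the image name and paths are only illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  volumes:
  - name: scripts
    emptyDir: {}
  initContainers:
  - name: copy-init-script
    # hypothetical image that contains /init.sh
    image: my-init-scripts:latest
    command: ["sh", "-c", "cp /init.sh /scripts/init.sh"]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
  containers:
  - name: command-demo-container
    image: debian
    command: ["sh", "/scripts/init.sh"]
    volumeMounts:
    - name: scripts
      mountPath: /scripts
  restartPolicy: OnFailure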
I finally decided to take the approach of creating a ConfigMap containing the script we want to run, and then referencing this ConfigMap from a volume.
Here is a short explanation:
In my pod.yaml file there is a volumeMount at "/pgconf", which is the directory from which the Docker image reads any SQL script you put there and runs it when the pod is starting.
And under volumes I put the ConfigMap name (postgres-init-script-configmap), which is the name defined inside the configmap.yaml file.
There is no need to create the ConfigMap manually with kubectl; the pod will take the configuration from the ConfigMap file as long as you place it in the same directory as the pod.yaml.
My pod YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: "{{.Values.container.name.primary}}"
  labels:
    name: "{{.Values.container.name.primary}}"
spec:
  securityContext:
    fsGroup: 26
  restartPolicy: {{default "Always" .Values.restartPolicy}}
  containers:
  - name: {{.Values.container.name.primary}}
    image: "{{.Values.image.repository}}/{{.Values.image.container}}:{{.Values.image.tag}}"
    ports:
    - containerPort: {{.Values.container.port}}
    env:
    - name: PGHOST
      value: /tmp
    - name: PG_PRIMARY_USER
      value: primaryuser
    - name: PG_MODE
      value: primary
    resources:
      requests:
        cpu: {{ .Values.resources.cpu }}
        memory: {{ .Values.resources.memory }}
    volumeMounts:
    - mountPath: /pgconf
      name: init-script
      readOnly: true
  volumes:
  - name: init-script
    configMap:
      name: postgres-init-script-configmap
My configmap.yaml (which contains the SQL script that will initialize the DB):
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init-script-configmap
data:
  setup.sql: |-
    CREATE USER david WITH PASSWORD 'david';