Update config map object from service app - kubernetes

I have my microservice and a ConfigMap deployed in the cluster.
I need to update the value of a variable defined in the ConfigMap object on the fly. I can restart the microservice programmatically (e.g. re-create the service pod). The main requirement is to keep the ConfigMap object in the cluster and only update the value inside it.
My current idea is to define the environment variables in an external file:
key1: foo
key2: bar
and mount the resulting ConfigMap as a volume in the pod spec:
spec:
  containers:
    - image: ...
      volumeMounts:
        - mountPath: /clusters-config
          name: config-volume
  volumes:
    - configMap:
        name: my-env-file
      name: config-volume
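For reference, a minimal sketch of the ConfigMap this volume refers to (assuming the keys from the file above are stored directly in its data section, and that the name my-env-file matches the volume definition) could look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-file
data:
  key1: foo
  key2: bar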
I wonder, with this approach, what are the main pitfalls/cons I should consider?
Is there a better option/solution that avoids the volume-mounted ConfigMap and keeps the variables inside the ConfigMap manifest?
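One possible way to update a single value in place while keeping the ConfigMap object, and then restart the workload, is sketched below (the Deployment name my-microservice is an assumption for illustration):
# Change only key1 inside the existing ConfigMap object
kubectl patch configmap my-env-file --type merge -p '{"data":{"key1":"new-value"}}'
# Re-create the pods so they pick up the new value right away
kubectl rollout restart deployment my-microservice
Note that a volume-mounted ConfigMap is also refreshed by the kubelet after a short delay, so the restart is only needed if the change must be picked up immediately (and it is required for subPath mounts or env vars, which are not refreshed).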

Related

Is a secret mounted as a file editable from application code in a Kubernetes deployment?

I am mounting DB secrets as a file in my Kubernetes container. The DB secrets get updated after the password expiry time. I am using a polling mechanism to check whether the DB secrets have been reset to the updated value. Is it possible to change the mounted secret file from inside the container?
The file that gets mounted into the container is read-only, so it can't be edited from inside the container. But you can either update the secret itself, or copy the file to a different location within the container and edit it there.
I'm not sure how you set it up; posting the YAML of your pod configuration would help. For example, if you use hostPath to mount a file inside the container, every time you change the source file you see the changes inside the container:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: busybox
    name: test-container
    command: ["/bin/sh", "-c", "sleep 36000"]
    volumeMounts:
    - mountPath: /etc/db_pass
      name: password-volume
  volumes:
  - name: password-volume
    hostPath:
      path: /var/lib/original_password
      type: File
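Going back to the first suggestion (updating the secret itself rather than the file), a minimal sketch, with a hypothetical secret name and key, could be:
# Re-create the secret manifest with the new value and apply it over the existing object
kubectl create secret generic db-pass --from-literal=password=new-password \
  --dry-run=client -o yaml | kubectl apply -f -
When the secret is mounted as a whole volume (not via subPath), the kubelet refreshes the projected file after a short delay, so a polling application will eventually see the new value.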

How to set a Kubernetes secret key name when using --from-file, other than the filename?

Is there a way to set a Kubernetes secret key name when using --from-file, other than the filename?
I have a bunch of different configuration files that I use as secrets.json within my containers. However, to organize my files, none of them are named secrets.json on my host. For example secrets.dev.json or secrets.test.json. My apps only know to read in secrets.json.
When I create a secret with kubectl create secret generic my-app-secrets --from-file=secrets.dev.json, this results in the key name being secrets.dev.json and not secrets.json.
I'm mounting my secret contents as a file (this is a carry-over from migrating from Docker Swarm).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    spec:
      volumes:
        - name: my-secret
          secret:
            secretName: my-app-secrets
      containers:
        - name: my-app
          volumeMounts:
            - name: my-secret
              mountPath: "/run/secrets/secrets.json"
              subPath: "secrets.json"
Because secrets.json doesn't exist as a key (the key was taken from the filename, secrets.dev.json), the mount point gets turned into a directory instead. I end up with this mount path: /run/secrets/secrets.json/secrets.dev.json.
I'd like to be able to set the key name to secrets.json instead of using the filename of secrets.dev.json.
You can specify the key name with --from-file=[key=]source:
kubectl create secret generic my-app-secrets --from-file=secrets.json=secrets.dev.json
Here, secrets.json is the key name and secrets.dev.json is the source file.
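To double-check that the key ended up as secrets.json, the secret's keys can be listed, for example:
kubectl describe secret my-app-secrets
With that key in place, the subPath mount from the deployment above resolves to a regular file at /run/secrets/secrets.json instead of a directory.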

Do containers in a pod share the filesystem, similar to how they share the same network namespace?

I have created a pod with two containers. I know that different containers in a pod share the same network namespace (i.e. the same IP and port space) and can also share a storage volume between them, e.g. via ConfigMaps. My question is: do the containers also share the same filesystem? For instance, in my case I have one container 'C1' that generates a dynamic file every 10 minutes at /var/targets.yml, and I want the other container 'C2' to read this file and perform its own independent action.
Is there a way to do this, maybe some workaround via ConfigMaps? Or do I have to access the file over the network (which may not be a good idea when it comes to pod restarts)? Any suggestions or references, please?
You can use an emptyDir for this:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/test-webserver
    name: generating-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: gcr.io/google_containers/test-webserver
    name: consuming-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
But be aware that the data isn't persistent: an emptyDir survives container restarts, but its contents are deleted when the pod is removed from the node.
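Adapted to the writer/reader scenario from the question, a minimal sketch (busybox images and the /shared path are chosen here just for illustration) could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: shared-files
spec:
  containers:
  - name: c1-writer
    image: busybox
    # regenerates the file every 10 minutes
    command: ["sh", "-c", "while true; do date > /shared/targets.yml; sleep 600; done"]
    volumeMounts:
    - mountPath: /shared
      name: shared-volume
  - name: c2-reader
    image: busybox
    # reads the file produced by the other container
    command: ["sh", "-c", "while true; do cat /shared/targets.yml; sleep 600; done"]
    volumeMounts:
    - mountPath: /shared
      name: shared-volume
  volumes:
  - name: shared-volume
    emptyDir: {}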

How to set default value for Kubernetes ConfigMap (or does Helm overwrite ConfigMap values?)

Use case:
I want to be able to re-run a job from where the first job left off. I am using Helm to deploy into Kubernetes.
I have the idea of saving the state of the first job in a ConfigMap. The YAML defining the ConfigMap is packaged up with the job, and both are deployed at the same time with Helm.
apiVersion: v1
kind: ConfigMap
metadata:
  name: NameOfMyConfigMap
data:
  someKey: someValue
  MY_STATE: state   # <---- see below as to whether this should be included or not
The job is run with an ENV variable set from the ConfigMap:
env:
  - name: MY_STATE
    valueFrom:
      configMapKeyRef:
        name: NameOfMyConfigMap
        key: MY_STATE
The job runs a script that checks whether $MY_STATE is set. If it is not set, the job is being run for the first time; otherwise the script shuts down the already running first job, saves that job's state into the MY_STATE ConfigMap key, and launches the job again using the saved state.
If I don't declare the MY_STATE key in the initial ConfigMap definition, then the first run of the job will fail, as the env definition above cannot find the ConfigMap key.
If I do declare the value (MY_STATE: "") in the ConfigMap definition, then the first deployment will work. However, if I re-deploy the job with helm upgrade, won't the value I enter in the definition overwrite an existing value in the existing ConfigMap?
What is the best method of storing state in between runs of the same job?
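(For reference, a minimal sketch of how the job's script could write its state back into the ConfigMap, assuming kubectl is available in the job image and the pod's service account is allowed to patch ConfigMaps:
kubectl patch configmap NameOfMyConfigMap --type merge -p '{"data":{"MY_STATE":"some-saved-state"}}'
Here some-saved-state is just a placeholder for whatever state the script wants to persist.)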
Have you tried using volumes? In that case the stored value should not be overwritten when using helm upgrade.
Could an example like this work? (From
https://groups.google.com/forum/#!msg/kubernetes-users/v2806ezEdPk/1geJCO8-AQAJ)
apiVersion: batch/v1
kind: Job
metadata:
  name: keystore-configmap-job
spec:
  template:
    metadata:
      name: keystore-configmap
    spec:
      containers:
      - name: keystore
        image: ubuntu
        volumeMounts:
        - name: keystore-configmap-volume
          mountPath: /config-base64
        command: [ "sh", "-c", "cat /config-base64/keystore.jks | base64 --decode | sha256sum" ]
      restartPolicy: Never
      volumes:
      - name: keystore-configmap-volume
        configMap:
          name: keystore-configmap

How to pass a configuration file through YAML on Kubernetes when creating a new replication controller

I am trying to pass a configuration file (which is located on the master) to an nginx container at the time of replication controller creation through Kubernetes, e.g. the way we use the ADD command in a Dockerfile...
There isn't a way to dynamically add a file to a pod specification when instantiating it in Kubernetes.
Here are a couple of alternatives (that may solve your problem):
Build the configuration file into your container (using the Docker ADD command). This has the advantage that it works in the way you are already familiar with, but the disadvantage that you can no longer parameterize your container without rebuilding it.
Use environment variables instead of a configuration file. This may require some refactoring of your code (or creating a side-car container to turn environment variables into the configuration file that your application expects).
Put the configuration file into a volume. Mount this volume into your pod and read the configuration file from the volume.
Use a secret. This isn't the intended use for secrets, but secrets manifest themselves as files inside your container, so you can base64 encode your configuration file, store it as a secret in the apiserver, and then point your application to the location of the secret file that is created inside your pod.
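As a rough sketch of that last option, assuming a local file nginx.conf and names picked only for illustration: the file can be stored as a secret (kubectl handles the base64 encoding) and mounted where the application expects it.
kubectl create secret generic nginx-conf --from-file=nginx.conf

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-secret-conf
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: conf
      mountPath: /etc/nginx/conf.d   # nginx.conf shows up here as a file
  volumes:
  - name: conf
    secret:
      secretName: nginx-conf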
I believe you can also download the config during container initialization.
See the example below; you could download your config instead of index.html, but I would not use this for sensitive info like passwords.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  # These containers are run during pod initialization
  initContainers:
  - name: install
    image: busybox
    command:
    - wget
    - "-O"
    - "/work-dir/index.html"
    - http://kubernetes.io
    volumeMounts:
    - name: workdir
      mountPath: "/work-dir"
  dnsPolicy: Default
  volumes:
  - name: workdir
    emptyDir: {}