Terraform kubernetes_config_map --from-env-file

I am creating a Kubernetes ConfigMap using the '--from-env-file' option to store the file contents as environment variables.
kubectl create configmap env --from-env-file=env.properties -n namespace
When I create a Terraform resource as below, the created ConfigMap contains a single file entry, not individual environment variables.
resource "kubernetes_config_map" "env" {
metadata {
name = "env-config"
namespace = var.namespace
}
data = {
"env.properties" = "${file("${path.module}/env.properties")}"
}
}
How do I create a ConfigMap with the file content as environment variables using the terraform-kubernetes-provider resource?

If env.properties looks like this:
$ cat env.properties
enemies=aliens
lives=3
allowed="true"
Then kubectl create configmap env --from-env-file=env.properties -n namespace would result in something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env
  namespace: namespace
data:
  allowed: '"true"'
  enemies: aliens
  lives: "3"
But what you're doing with Terraform would result in something more like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env
  namespace: namespace
data:
  env.properties: |
    enemies=aliens
    lives=3
    allowed="true"
Based on the Terraform docs it appears that what you're looking for, i.e. some native support for --from-env-file behaviour within the Terraform provider, is not possible.
The ConfigMap format you get doing it the Terraform way could still be useful; you might just have to change how you pull the data from the ConfigMap into your pods/deployments. If you can share more details, and even a simplified/sanitized example of the pods/deployments where you consume the ConfigMap, it may be possible to describe how to change those to make use of the different style of ConfigMap.
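That said, if you do want to mimic --from-env-file on the Terraform side, one possible workaround (a sketch assuming Terraform 0.12+, simple KEY=VALUE lines, no comments, and no = inside values) is to parse the file with a for expression so that each line becomes its own ConfigMap key:

resource "kubernetes_config_map" "env" {
  metadata {
    name      = "env-config"
    namespace = var.namespace
  }
  data = {
    # compact() drops empty lines; each remaining line is split on "=" into key and value
    for line in compact(split("\n", file("${path.module}/env.properties"))) :
    split("=", line)[0] => split("=", line)[1]
  }
}

With the example file above this yields enemies, lives and allowed as separate keys, much like the kubectl --from-env-file output.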

Related

Using sensitive environment variables in Kubernetes configMaps

I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod spec inside the ConfigMap?
For example:
I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
So inside the pod spec I reference the secret as an env var. The ConfigMap will be mounted as a volume from within the spec:
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: foo
        key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it's a somewhat controversial topic because the discussion has continued many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasoning and maybe find another approach for getting these variables into your environment.
"OK, but this is Real Life, I need to make this work"
Then I recommend this workaround:
Import Data to Config Map from Kubernetes Secret
It performs the substitution with a shell in the entrypoint of the container.
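A minimal sketch of that idea, assuming the image contains envsubst (from the gettext package) and that the properties template uses ${PASSWORD} rather than $(PASSWORD); the image name and paths are illustrative only:

containers:
  - name: app
    image: my-app:latest                 # illustrative image
    command: ["/bin/sh", "-c"]
    # Render the template into a writable path, then start the real process.
    args:
      - "envsubst < /config/application.properties > /tmp/application.properties && exec /app/run.sh"
    env:
      - name: PASSWORD                   # same secretKeyRef as in the question
        valueFrom:
          secretKeyRef:
            name: foo
            key: pw
    volumeMounts:
      - name: app-config
        mountPath: /config
volumes:
  - name: app-config
    configMap:
      name: application.properties

The application then reads /tmp/application.properties instead of the mounted template.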

How to edit configmap in kubernetes and override the values from a different yaml file?

I want to edit the ConfigMap and replace some values, but it should be done using a different YAML file in which I'll specify the overriding values.
I was trying kubectl edit cm -f replace.yaml but this didn't work, so I want to know what the structure of the new file should be.
apiVersion: v1
kind: ConfigMap
metadata:
  name: int-change-change-management-service-configurations
data:
  should_retain_native_dn: "False"
  NADC_IP: "10.11.12.13"
  NADC_USER: "omc"
  NADC_PASSWORD: "hello"
  NADC_PORT: "991"
  plan_compare_wait_time: "1"
  plan_prefix: ""
  ingress_ip: "http://10.12.13.14"
Now let us assume NADC_IP should be changed. What should the structure of the YAML file be, and with which command should it be applied?
The override should only take place during helm test, for example when I run helm test <release-name>.
kubectl replace -f replace.yaml
If you have a configmap in place like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  should_retain_native_dn: "False"
  NADC_IP: "10.11.12.13"
and you want to change the value of NADC_IP, create a manifest file like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  should_retain_native_dn: "False"
  NADC_IP: "12.34.56.78" # the new IP
and run kubectl replace -f replace.yaml
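If you only need to change a single key and don't want to maintain a full replacement manifest, a merge patch is another option; a sketch using the example ConfigMap above:

kubectl patch configmap my-configmap --type merge -p '{"data":{"NADC_IP":"12.34.56.78"}}'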
To update a variable in the ConfigMap you need to take two steps:
First, update the value of the variable:
kubectl create configmap <name_of_configmap> --from-literal=<var_name>=<new_value> -o yaml --dry-run | kubectl replace -f -
So in your case it will look like this:
kubectl create configmap int-change-change-management-service-configurations --from-literal=NADC_IP=<new_value> -o yaml --dry-run | kubectl replace -f -
Second, restart the pod:
kubectl delete pod <pod_name>
The app will use the new value from then on. Let me know if it works for you.
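If the pod is managed by a Deployment, a less manual alternative (assuming kubectl v1.15 or newer) is to trigger a rolling restart instead of deleting the pod by hand:

kubectl rollout restart deployment <deployment_name>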
You can try this; it will give you the YAML format:
kubectl get configmap int-change-change-management-service-configurations -o yaml
You can copy the content, replace the values inside a new YAML file, and apply the changes.
EDIT 1:
If you want to edit over the terminal you can run
kubectl edit configmap {configmap name}
It will open the ConfigMap in the vim editor and you can replace values directly from there.
EDIT 2:
To dump the ConfigMap straight into a file:
kubectl get cm {configmap name} -o=yaml --export > filename.yaml

What does key mean when creating a kubernetes configmap from a file

I see here a syntax like this:
kubectl create cm configmap4 --from-file=special=config4.txt
I did not find a description of what the repeated = and the special part mean here.
The Kubernetes documentation only shows a single use of = after --from-file when creating ConfigMaps with kubectl.
Generating the YAML shows that this middle part is the key under which the whole content loaded from the file is nested (the special keyword in the question's example).
It looks like this:
apiVersion: v1
data:
  special: |
    var3=val3
    var4=val4
kind: ConfigMap
metadata:
  creationTimestamp: "2019-06-01T08:20:15Z"
  name: configmap4
  namespace: default
  resourceVersion: "123320"
  selfLink: /api/v1/namespaces/default/configmaps/configmap4
  uid: 1582b155-8446-11e9-87b7-0800277f619d
kubectl create configmap my-config --from-file=path/to/bar
When creating a configmap based on a file, the key will default to the basename of the file, and the value will default to the file content. If the basename is an invalid key, you may specify an alternate key.
Create a new configmap named my-config with specified keys instead of file basenames on disk
kubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt
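To see what the key controls in practice, here is a sketch of a pod fragment that mounts configmap4 as a volume (the mount path and container details are illustrative); the key special becomes the file name inside the mount path:

containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/config/special && sleep 3600"]
    volumeMounts:
      - name: config
        mountPath: /etc/config
volumes:
  - name: config
    configMap:
      name: configmap4

Inside the container, /etc/config/special contains the raw var3=val3 and var4=val4 lines from config4.txt.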

Spinnaker - Reference ConfigMap versioned value inside manifest

I'm deploying a single yaml file containing two manifests using the Spinnaker Kubernetes Provider V2 (Manifest deployer). Inside the Deployment I have a custom annotation that references the ConfigMap:
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config-map
data:
  foo: bar
---
# Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deployment
spec:
  template:
    metadata:
      annotations:
        my-config-map-reference: my-config-map
    [...]
Upon deployment, Spinnaker applies versioning to the ConfigMap, which is then deployed as my-config-map-v000.
I'd like to be able to retrieve the full name inside my custom annotation, but since Spinnaker automatically replaces ConfigMap references with the appropriate versioned values only at specific entrypoints ( https://github.com/spinnaker/clouddriver/blob/master/clouddriver-kubernetes/src/main/groovy/com/netflix/spinnaker/clouddriver/kubernetes/v2/artifact/ArtifactReplacerFactory.java ), this does not work in my case.
According to Spinnaker documentation ( https://www.spinnaker.io/reference/artifacts/in-kubernetes-v2/#why-not-pipeline-expressions ) I may be able to write a Pipeline Expression to retrieve the full name, but I wasn't able to do so.
How can I set the full ConfigMap name inside the annotation?
Spinnaker can inject artifacts from the currently executing pipeline into your manifests as they are deployed.
Refer to this guide for instructions on binding artifacts in manifests.
However, as mentioned here, there is NO resource mapping for annotations, so the value should be user-supplied only, for example as a parameter for your manifest.
In the future, certain relationships between resources will be recorded and annotated by Spinnaker.

If I change my ConfigMap key value after deployment, does the deployment of the application that uses the ConfigMap values need to be restarted?

I have a sample Node.js application which uses an envVar environment variable, and I have deployed it on a Kubernetes cluster. I am passing the env variable through a ConfigMap.
Once it is deployed and the pods are all running, if I change my ConfigMap with a new value, does the deployment of my Node.js application need to be redone?
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app1-config
  namespace: default
data:
  envVal: '12345' # initial value
  apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
after updating the configmap.yaml
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app1-config
  namespace: default
data:
  envVal: '56789' # changed value
  apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
When you mount the keys from the ConfigMap as environment variables, you would need to restart your pod for the changes to take effect.
When you mount it as a volume, the files in the volume will be updated automatically. The update is not immediate; there is a sync interval configured in the kubelet before it checks for changes and applies the update, but it is normally quite quick. However, it still depends on your application how it loads the data from the file: whether it can update itself on the fly when the files change, or whether the data is loaded only once at startup.
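For reference, a sketch of the two consumption styles described above, using the app1-config ConfigMap from the question (container name, image, env var name, and mount path are illustrative): a value injected as an environment variable is read once at pod start, while a volume mount is refreshed by the kubelet.

containers:
  - name: app1
    image: node:18                        # illustrative image
    env:                                  # read once at pod start; requires a restart to pick up changes
      - name: ENV_VAL
        valueFrom:
          configMapKeyRef:
            name: app1-config
            key: envVal
    volumeMounts:                         # files are refreshed automatically after the kubelet sync interval
      - name: app1-config-vol
        mountPath: /etc/app1
volumes:
  - name: app1-config-vol
    configMap:
      name: app1-config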