How to create an environment variable in a Kubernetes container

I am trying to pass an environment variable to a Kubernetes container.
What have I done so far?
Create a deployment
kubectl create deployment foo --image=foo:v1
Create a NodePort service and expose the port
kubectl expose deployment/foo --type=NodePort --port=9000
See the pods
kubectl get pods
Dump the configurations (so I can add the environment variable)
kubectl get deployments -o yaml > dev/deployment.yaml
kubectl get svc -o yaml > dev/services.yaml
kubectl get pods -o yaml > dev/pods.yaml
Add the env variable to the pods:
env:
- name: FOO_KEY
  value: "Hellooooo"
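For reference, the env block belongs under the container entry in the dumped deployment YAML; a minimal sketch using the names from the steps above:
spec:
  template:
    spec:
      containers:
        - name: foo
          image: foo:v1
          env:
            - name: FOO_KEY
              value: "Hellooooo"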
Delete the svc, pods, and deployments
kubectl delete -f dev/ --recursive
Apply the configuration
kubectl apply -f dev/ --recursive
Verify env parameters
kubectl describe pods
Something weird
If I manually change the metadata of the pod YAML and hard-code the name of the pod, it gets the env variable. However, this time two pods come up: one with the hard-coded name and the other with a hash suffix. For example, if the name I hard-coded was "foo", two pods named foo and foo-12314faf (for example) would appear in "kubectl get pods". Can you explain why?
Question
Why does the verification step not show the environment variable?

As the issue was resolved in the comment section:
If you want to set env variables on pods, I would suggest you use the set subcommand.
kubectl set env --help will provide more detail, such as how to list existing env variables and create new ones.
Examples:
# Update deployment 'registry' with a new environment variable
kubectl set env deployment/registry STORAGE_DIR=/local
# List the environment variables defined on deployment 'sample-build'
kubectl set env deployment/sample-build --list
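Applied to the deployment created in the question, setting and then listing the variable would look like this:
kubectl set env deployment/foo FOO_KEY=Hellooooo
kubectl set env deployment/foo --list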
Deployment enables declarative updates for Pods and ReplicaSets. Pods are not typically launched directly on a cluster. Instead, pods are usually managed by a ReplicaSet, which is in turn managed by a Deployment.
The following thread discusses what-is-the-difference-between-a-pod-and-a-deployment.

You can add any number of env vars to your deployment file:
spec:
  containers:
    - name: auth
      image: lord/auth
      env:
        - name: MONGO_URI
          value: "mongodb://auth-mongo-srv:27017/auth"
You can then read it in your app as process.env.MONGO_URI.
Or you can create a secret first, then reference the newly created secret in any number of deployment files to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
spec:
  containers:
    - name: auth
      image: lord/auth
      env:
        - name: MONGO_URI
          value: "mongodb://auth-mongo-srv:27017/auth"
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
Both are then available in your app as process.env.MONGO_URI and process.env.JWT_KEY.
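To confirm the secret exists and carries the expected key before wiring it into deployments:
kubectl get secret jwt-secret
kubectl describe secret jwt-secret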

Related

Odd Kubernetes behaviour in AWS EKS cluster

In an EKS cluster (v1.22.10-eks-84b4fe6) that I manage, I've spotted a behavior that I had never seen before (or that I missed completely): in a namespace with an application created by a public Helm chart, if I create a separate, unrelated pod (a simple empty busybox with a sleep command in it), it will automatically get some environment variables set, always starting with the name of the namespace and referring to the available services related to the Helm chart/deployment already in it. I'm not sure I understand this behavior. I've tested this in several other namespaces with Helm charts deployed as well, and I get the same results (each time with different env vars, obviously).
An example in a namespace with this chart installed -> https://github.com/bitnami/charts/tree/master/bitnami/keycloak
testpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: keycloak-18
spec:
  containers:
    - image: busybox
      name: testpod
      command: ["/bin/sh", "-c"]
      args: ["sleep 3600"]
When in the pod:
/ # env
KEYCLOAK_18_METRICS_PORT_8080_TCP_PROTO=tcp
KUBERNETES_PORT=tcp://10.100.0.1:443
KUBERNETES_SERVICE_PORT=443
KEYCLOAK_18_METRICS_SERVICE_PORT=8080
KEYCLOAK_18_METRICS_PORT=tcp://10.100.104.11:8080
KEYCLOAK_18_PORT_80_TCP_ADDR=10.100.71.5
HOSTNAME=testpod
SHLVL=2
KEYCLOAK_18_PORT_80_TCP_PORT=80
HOME=/root
KEYCLOAK_18_PORT_80_TCP_PROTO=tcp
KEYCLOAK_18_METRICS_PORT_8080_TCP=tcp://10.100.104.11:8080
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_ADDR=10.100.155.185
KEYCLOAK_18_POSTGRESQL_SERVICE_HOST=10.100.155.185
KEYCLOAK_18_PORT_80_TCP=tcp://10.100.71.5:80
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_PORT=5432
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP_PROTO=tcp
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KEYCLOAK_18_POSTGRESQL_PORT=tcp://10.100.155.185:5432
KEYCLOAK_18_POSTGRESQL_SERVICE_PORT=5432
KEYCLOAK_18_SERVICE_PORT_HTTP=80
KEYCLOAK_18_POSTGRESQL_SERVICE_PORT_TCP_POSTGRESQL=5432
KUBERNETES_PORT_443_TCP_PROTO=tcp
KEYCLOAK_18_POSTGRESQL_PORT_5432_TCP=tcp://10.100.155.185:5432
KEYCLOAK_18_METRICS_SERVICE_PORT_HTTP=8080
KEYCLOAK_18_SERVICE_HOST=10.100.71.5
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
KUBERNETES_SERVICE_HOST=10.100.0.1
PWD=/
KEYCLOAK_18_METRICS_PORT_8080_TCP_ADDR=10.100.104.11
KEYCLOAK_18_METRICS_SERVICE_HOST=10.100.104.11
KEYCLOAK_18_SERVICE_PORT=80
KEYCLOAK_18_PORT=tcp://10.100.71.5:80
KEYCLOAK_18_METRICS_PORT_8080_TCP_PORT=8080
I've looked into this a bit and found this doc: https://kubernetes.io/docs/concepts/containers/container-environment/, but it lists fewer variables than I can see myself.
I may be behind on some Kubernetes features; does anyone have a clue?
Thanks!
What you are seeing is expected. As stated in the official documentation:
When a Pod is run on a Node, the kubelet adds a set of environment
variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and
{SVCNAME}_SERVICE_PORT variables, where the Service name is
upper-cased and dashes are converted to underscores.
This behavior is not EKS specific.
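If you don't want these service variables injected into a pod, the pod spec has an enableServiceLinks field, which defaults to true; a minimal sketch applied to the testpod above:
spec:
  enableServiceLinks: false
  containers:
    - image: busybox
      name: testpod
      command: ["/bin/sh", "-c"]
      args: ["sleep 3600"]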

Duplicated env variable names in pod definition, what is the precedence rule to determine the final value?

Using Kubernetes 1.19.3, I initialize env variable values in three different ways:
env field with explicit key/value in the pod definition
envFrom using configMapRef and secretRef
When a key name is duplicated, as shown in the example below, DUPLIK1 and DUPLIK2 are defined multiple times with different values.
What is the precedence rule that Kubernetes uses to assign the final value to the variable?
# create some test Key/Value configs and Key/Value secrets
kubectl create configmap myconfigmap --from-literal=DUPLIK1=myConfig1 --from-literal=CMKEY1=CMval1 --from-literal=DUPLIK2=FromConfigMap -n mydebugns
kubectl create secret generic mysecret --from-literal=SECRETKEY1=SECval1 --from-literal=SECRETKEY2=SECval2 --from-literal=DUPLIK2=FromSecret -n mydebugns
# create a test pod
cat <<EOF | kubectl apply -n mydebugns -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: container1
      image: busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: DUPLIK1
          value: "Key/Value defined in field env"
      envFrom:
        - configMapRef:
            name: myconfigmap
        - secretRef:
            name: mysecret
  restartPolicy: Never
EOF
Show the environment variable values. The result is deterministic; deleting the resources and recreating them always ends up with the same result.
kubectl logs pod/pod1 -n mydebugns
CMKEY1=CMval1
DUPLIK1=Key/Value defined in field env
DUPLIK2=FromSecret
SECRETKEY1=SECval1
SECRETKEY2=SECval2
Cleanup test resources
kubectl delete pod/pod1 -n mydebugns
kubectl delete cm/myconfigmap -n mydebugns
kubectl delete secret/mysecret -n mydebugns
From Kubernetes docs:
envVar: List of environment variables to set in the container.
Cannot be updated.
envFrom: List of sources to populate environment variables in the
container. The keys defined within a source must be a C_IDENTIFIER.
All invalid keys will be reported as an event when the container is
starting. When a key exists in multiple sources, the value associated
with the last source will take precedence. Values defined by an Env
with a duplicate key will take precedence. Cannot be updated.
The above link clearly states that env takes precedence over envFrom and cannot be updated.
Also, when a referenced key is present in multiple sources, the value associated with the last source overrides all previous values.
Based on the above, the result you are seeing is the expected behavior:
DUPLIK1 is defined in the env field, so it takes precedence over the configmap value.
DUPLIK2 is only defined via envFrom, so the one from the secret wins because the secret is listed last.
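To see the ordering rule in action, swapping the two envFrom sources in the pod above would flip the result for DUPLIK2 (a sketch; everything else unchanged):
envFrom:
  - secretRef:
      name: mysecret
  - configMapRef:
      name: myconfigmap   # now last, so DUPLIK2 resolves to FromConfigMap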

Change the secret a Kubernetes deployment expects

I've been having a recurring problem with a deployment for a particular pod, fooserviced, recently. I usually get a CreateContainerConfigError when I update the pod, and the detail given is Error: secrets "fooserviced-envars" not found. I'm not sure when I named the secret this poorly, but so far the only solution I've found is to re-add the environment variables file using
kubectl create secret generic fooserviced-envars --from-env-file ./fooserviced-envvars.txt
So now, when I do kubectl get secrets I see both fooserviced-envars and fooserviced-envvars. I'd like to change the deployment to use fooserviced-envvars; how would I do this?
You can edit the deployment via kubectl edit deployment deploymentname, which will open an editor, and you can change the secret name there live.
Another way to do this is to run kubectl get deployment deploymentname -o yaml > deployment.yaml, which gives you the YAML file; you can edit it in your editor and kubectl apply the modified YAML.
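If the deployment imports all of its variables from that secret, another option is kubectl set env with --from (the deployment name fooserviced is assumed here); note that this adds the keys as individual env var references, so you would still remove any remaining references to the old fooserviced-envars secret in the editor:
kubectl set env deployment/fooserviced --from=secret/fooserviced-envvars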
Make sure the secret is in the same namespace; otherwise you cannot use it.
If you want to change the deployment, edit your Kubernetes deployment YAML file, e.g.:
env:
  - name: POSTGRES_DB_URL
    valueFrom:
      secretKeyRef:
        key: postgres_db_url
        name: fooserviced-envvars
then kubectl apply your_deployment_file

Can kubectl delete environment variable?

I can update the envs through kubectl patch, as shown below, but is there any method to delete envs other than re-applying the whole deployment.yaml?
$ kubectl patch deployment demo-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name": "demo-deployment","env":[{"name":"foo","value":"bar"}]}]}}}}'
deployment.extensions "demo-deployment" patched
Can I delete the env "foo" through command line not using a re-deploy on the whole deployment?
This is coming late, but for newcomers: you can use the following kubectl command to remove an existing env variable from a deployment:
kubectl set env deployment/DEPLOYMENT_NAME VARIABLE_NAME-
Do not omit the trailing hyphen (-).
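With the deployment and variable from the question, that would be:
kubectl set env deployment/demo-deployment foo-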
If you are fine with redeployment, then follow the steps below (a command sketch follows this list):
Create a configmap and include your environment variables.
Load the env variables from the configmap in the deployment:
envFrom:
  - configMapRef:
      name: app-config
If you want to delete an env variable, remove its key/value pair from the configmap.
It will cause a redeployment. You can also delete the pod from the corresponding deployment.
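A minimal sketch of that flow (the configmap name matches the snippet above; the keys and deployment name are assumed for illustration):
kubectl create configmap app-config --from-literal=FOO=bar
# later, to drop FOO: recreate the configmap without it, then restart the pods
kubectl create configmap app-config --from-literal=OTHER=value --dry-run=client -o yaml | kubectl apply -f -
kubectl rollout restart deployment/demo-deployment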
Consider that containers is an array inside an object. Array elements can only be addressed by index, as opposed to objects, which can be addressed by key/value pairs (see the JSON patch reference). So the workaround is to address the env entry by its index.
Here you have the env vars placed in the container:
spec:
  containers:
    - env:
        - name: DEMO_GREETING
          value: Hello from the environment
        - name: DSADASD
          value: asdsad
Here is a command to remove the env var by index:
kubectl patch deployments asd --type=json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/env/1"}]'
And the result:
spec:
  containers:
    - env:
        - name: DEMO_GREETING
          value: Hello from the environment
This will still however restart your pod.
Hope that helps!
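To find the right index before patching, you can list the env var names in order (same deployment name as above):
kubectl get deployment asd -o jsonpath='{.spec.template.spec.containers[0].env[*].name}'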

Kubectl apply does not update pods or deployments

I'm using a CI to update my Kubernetes cluster whenever there's an update to an image. Whenever an image is pushed with the latest tag, the CI runs kubectl apply on the existing deployment, but nothing gets updated.
This is what runs:
$ kubectl apply --record --filename /tmp/deployment.yaml
My goal is that when the apply runs, a rolling deployment is executed.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: us.gcr.io/joule-eed41/api:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 1337
          args:
            - /bin/sh
            - -c
            - echo running api;npm start
          env:
            - name: NAMESPACE
              valueFrom:
                configMapKeyRef:
                  name: config
                  key: NAMESPACE
As others suggested, use a specific tag.
Set the new image using the following command:
kubectl set image deployment/deployment_name deployment_name=image_name:image_tag
In your case it would be
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1
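You can then watch the rollout complete with the standard status check:
kubectl rollout status deployment/api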
As @ksholla20 mentioned, using kubectl set image is a good option for many (most?) cases.
But if you can't change the image tag, consider using:
1) kubectl rollout restart deployment/<name>
(reference).
2) kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$CURRENT_BUILD_HASH_OR_DATE\"}}}}}" (reference)
(*) Notice that the patch command allows you to change specific properties in the deployment (or any other object chosen), like the pod template labels here, or other properties like the value of the NAMESPACE environment variable in your example.
I've run into the same problem, and none of the solutions posted so far helped. The solution is easy, but not easy to see or predict: the applied YAML will generate both a deployment and a replicaset the first time it's run. Unfortunately, applying changes to the manifest likely only replaces the replicaset, while the deployment remains unchanged. This is a problem because some changes need to happen at the deployment level, but the old deployment hangs around. For best results, delete the deployment and ensure all previous deployments and replicasets are deleted, then apply the updated manifest.
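A sketch of that last suggestion, using the names from the question (deleting the deployment also removes its replicasets by default):
kubectl delete deployment api
kubectl apply --record --filename /tmp/deployment.yaml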