I can update the envs through kubectl patch, as shown below. Is there any method to delete envs other than re-applying the whole deployment.yaml?
$ kubectl patch deployment demo-deployment -p '{"spec":{"template":{"spec":{"containers":[{"name": "demo-deployment","env":[{"name":"foo","value":"bar"}]}]}}}}'
deployment.extensions "demo-deployment" patched
Can I delete the env "foo" from the command line, without re-deploying the whole deployment?
This is coming late, but for newcomers: you can use the following kubectl command to remove an existing env variable from a deployment:
kubectl set env deployment/DEPLOYMENT_NAME VARIABLE_NAME-
Do not omit the hyphen (-) at the end.
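Applied to the deployment from the question, that would be:
kubectl set env deployment/demo-deployment foo-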
If you are fine with a redeployment, then follow the steps below:
Create a ConfigMap and include your environment variables (see the sketch after these steps)
Load the env variables from the ConfigMap in the deployment:
envFrom:
- configMapRef:
    name: app-config
If you want to delete an env variable, remove those key-value pairs from the ConfigMap.
It will cause a redeployment. You can also delete the pod from the corresponding deployment.
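A minimal sketch of such a ConfigMap, assuming the app-config name referenced above and the foo variable from the question:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  foo: bar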
Consider that containers is an array inside an object. Arrays can only be accessed by their index, as opposed to objects, which can be accessed via key-value pairs. See reference here. So the workaround is to use the index.
Here you have the env vars that are placed into the container:
spec:
  containers:
  - env:
    - name: DEMO_GREETING
      value: Hello from the environment
    - name: DSADASD
      value: asdsad
Here you have a command to remove the env using its index:
kubectl patch deployments asd --type=json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/env/1"}]'
And the result:
spec:
  containers:
  - env:
    - name: DEMO_GREETING
      value: Hello from the environment
This will still restart your pod, however.
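If you first need to find the index of the variable to remove, one way (a sketch, using the same asd deployment name as above) is to list the env names in order; the position in the output is the index:
kubectl get deployment asd -o jsonpath='{.spec.template.spec.containers[0].env[*].name}'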
Hope that helps!
I have a pod with the following spec:
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: WATCH_NAMESPACE
      valueFrom:
        configMapKeyRef:
          name: watch-namespace-config
          key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod looks for values in the watch-namespace-config configmap.
When I manually change the configmap values, I want the pod to restart automatically to reflect this change. Is that possible in any way?
This is currently a feature in progress: https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader - https://github.com/stakater/Reloader
It watches for changes in ConfigMaps and/or Secrets, then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet and Rollout.
How to use it - https://github.com/stakater/Reloader#how-to-use-reloader
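In short, you annotate the workload you want restarted; a minimal sketch of the catch-all annotation documented in the Reloader README:
kind: Deployment
metadata:
  annotations:
    reloader.stakater.com/auto: "true"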
As you correctly mentioned, once you update a ConfigMap or Secret, the Deployment/Pod/StatefulSet is not updated.
An optional solution for this scenario is to use Kustomization.
Kustomization generates a unique name with a hash suffix every time you update the ConfigMap/Secret, for example: ConfigMap-xxxxxx.
If you then run:
kubectl kustomize . | kubectl apply -f -
kubectl will "update" the workload with the new ConfigMap values.
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
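For reference, a minimal kustomization.yaml sketch using the configMapGenerator (the file name pod.yaml is illustrative; the ConfigMap name and key are taken from the question):
# kustomization.yaml
resources:
- pod.yaml
configMapGenerator:
- name: watch-namespace-config
  literals:
  - WATCH_NAMESPACE=dev
Each change to the literals produces a ConfigMap with a new hash suffix, and the generated name is substituted into the workload that references it, which triggers a rollout.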
Using Kubernetes 1.19.3, I initialize env variable values in 3 different ways:
env field with explicit key/value in the pod definition
envFrom using configMapRef and secretRef
When a key name is duplicated, as shown in the example below, DUPLIK1 and DUPLIK2 are defined multiple times with different values.
What is the precedence rule that Kubernetes uses to assign the final value to the variable?
# create some test Key/Value configs and Key/Value secrets
kubectl create configmap myconfigmap --from-literal=DUPLIK1=myConfig1 --from-literal=CMKEY1=CMval1 --from-literal=DUPLIK2=FromConfigMap -n mydebugns
kubectl create secret generic mysecret --from-literal=SECRETKEY1=SECval1 --from-literal=SECRETKEY2=SECval2 --from-literal=DUPLIK2=FromSecret -n mydebugns
# create a test pod
cat <<EOF | kubectl apply -n mydebugns -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    image: busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: DUPLIK1
      value: "Key/Value defined in field env"
    envFrom:
    - configMapRef:
        name: myconfigmap
    - secretRef:
        name: mysecret
  restartPolicy: Never
EOF
Show the environment variable values. The result is deterministic: deleting the resources and recreating them always ends with the same result.
kubectl logs pod/pod1 -n mydebugns
CMKEY1=CMval1
DUPLIK1=Key/Value defined in field env
DUPLIK2=FromSecret
SECRETKEY1=SECval1
SECRETKEY2=SECval2
Cleanup test resources
kubectl delete pod/pod1 -n mydebugns
kubectl delete cm/myconfigmap -n mydebugns
kubectl delete secret/mysecret -n mydebugns
From Kubernetes docs:
envVar: List of environment variables to set in the container.
Cannot be updated.
envFrom: List of sources to populate environment variables in the
container. The keys defined within a source must be a C_IDENTIFIER.
All invalid keys will be reported as an event when the container is
starting. When a key exists in multiple sources, the value associated
with the last source will take precedence. Values defined by an Env
with a duplicate key will take precedence. Cannot be updated.
The docs quoted above clearly state that env takes precedence over envFrom and cannot be updated.
Also, when a referenced key is present in multiple resources, the value associated with the last source will override all previous values.
Based on the above, the result you are seeing is expected behavior:
DUPLIK1 is set in the env field, so it takes precedence over the value from the ConfigMap
DUPLIK2 is only set via envFrom, so the value from the Secret takes precedence because the Secret is the last source listed
I am trying to pass an environment variable to a Kubernetes container.
What have I done so far?
Create a deployment
kubectl create deployment foo --image=foo:v1
Create a NODEPORT service and expose the port
kubectl expose deployment/foo --type=NodePort --port=9000
See the pods
kubectl get pods
dump the configurations (so I can add the environment variable)
kubectl get deployments -o yaml > dev/deployment.yaml
kubectl get svc -o yaml > dev/services.yaml
kubectl get pods -o yaml > dev/pods.yaml
Add env variable to the pods
env:
- name: FOO_KEY
  value: "Hellooooo"
Delete the svc,pods,deployments
kubectl delete -f dev/ --recursive
Apply the configuration
kubectl apply -f dev/ --recursive
Verify env parameters
kubectl describe pods
Something weird
If I manually change the meta information of the pod YAML and hard-code the name of the pod, it gets the env variable. However, this time two pods come up: one with the hard-coded name and the other with a hash appended. For example, if the name I hard-coded was "foo", two pods, namely foo and foo-12314faf (example), would appear in "kubectl get pods". Can you explain why?
Question
Why does the verification step not show the environment variable?
As the issue was resolved in the comment section, I will just add this:
If you want to set env variables on pods, I would suggest you use the set subcommand.
kubectl set env --help will provide more detail, such as how to list the current env variables and create new ones.
Examples:
# Update deployment 'registry' with a new environment variable
kubectl set env deployment/registry STORAGE_DIR=/local
# List the environment variables defined on a deployments 'sample-build'
kubectl set env deployment/sample-build --list
Deployment enables declarative updates for Pods and ReplicaSets. Pods are not typically directly launched on a cluster. Instead, pods are usually managed by replicaSet which is managed by deployment.
The following thread discusses what-is-the-difference-between-a-pod-and-a-deployment.
You can add any number of env vars in your deployment file:
spec:
  containers:
  - name: auth
    image: lord/auth
    env:
    - name: MONGO_URI
      value: "mongodb://auth-mongo-srv:27017/auth"
process.env.MONGO_URI
Or you can create a secret first and then use the newly created secret in your countless deployment files to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
spec:
  containers:
  - name: auth
    image: lord/auth
    env:
    - name: MONGO_URI
      value: "mongodb://auth-mongo-srv:27017/auth"
    - name: JWT_KEY
      valueFrom:
        secretKeyRef:
          name: jwt-secret
          key: JWT_KEY
process.env.MONGO_URI
process.env.JWT_KEY
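To double-check the values inside a running container, a quick sketch (the pod name is a placeholder for whatever kubectl get pods shows for the auth deployment):
kubectl exec <auth-pod-name> -- printenv MONGO_URI JWT_KEY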
I'm new to Kubernetes, and I'm wondering about the best way to inject values into a ConfigMap.
For now, I defined a Deployment object which takes the relevant values from a ConfigMap file. I wish to use the same .yml file for my production and staging environments, so only the values in the ConfigMap will change, while the file itself stays the same.
Is there any built-in way to do this in Kubernetes, without using configuration management tools (like Ansible, Puppet, etc.)?
You can find the links to the quoted text in the end of the answer.
A good practice when writing applications is to separate application code from configuration. We want to enable application authors to easily employ this pattern within Kubernetes. While the Secrets API allows separating information like credentials and keys from an application, no object existed in the past for ordinary, non-secret configuration. In Kubernetes 1.2, we’ve added a new API resource called ConfigMap to handle this type of configuration data.
Besides, Secrets data will be stored in a base64 encoded form, which is also suitable for binary data such as keys, whereas ConfigMaps data will be stored in plain text format, which is fine for text files.
The ConfigMap API is simple conceptually. From a data perspective, the ConfigMap type is just a set of key-value pairs.
There are several ways you can create config maps:
Using a list of values on the command line
$ kubectl create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm
Using a file on the disk as a source of data
$ kubectl create configmap game-config-2 --from-file=docs/user-guide/configmap/kubectl/game.properties --from-file=docs/user-guide/configmap/kubectl/ui.properties
$ kubectl create configmap game-config-3 --from-file=game-special-key=docs/user-guide/configmap/kubectl/game.properties
Using a directory of files as a source of data
$ kubectl create configmap game-config --from-file=configure-pod-container/configmap/kubectl/
Combining all three previously mentioned methods
There are several ways to consume ConfigMap data in Pods:
Use values in ConfigMap as environment variables
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "echo $(SPECIAL_LEVEL_KEY)" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: SPECIAL_LEVEL
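This snippet assumes a ConfigMap named special-config with a SPECIAL_LEVEL key; a rough sketch (the value is illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  SPECIAL_LEVEL: very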
Use data in ConfigMap as files on the volume
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls /etc/config/" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      # ConfigMap containing the files
      name: special-config
Only changes in ConfigMaps that are consumed in a volume will be visible inside the running pod. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of ConfigMaps cache in kubelet.
A Pod whose specification references a non-existent ConfigMap or Secret won't start.
Consider reading the official documentation and other good articles for even more details:
Configuration management with Containers
Configure a Pod to Use a ConfigMap
Using ConfigMap
Kubernetes ConfigMaps and Secrets
Managing Pod configuration using ConfigMaps and Secrets in Kubernetes
You can also create a configmap from an env file
kubectl create configmap special-config \
--from-env-file=configure-pod-container/configmap/kubectl/game-env-file.properties
and access it in the container
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: special-config
  restartPolicy: Never
If you're thinking of Ansible, then I suspect you'll want to look at Helm for this. I don't think it is a concern that Kubernetes itself would address, but Helm is a Kubernetes project.
If I understand correctly you've got a configmap yaml file and you want to deploy it with one set of values for staging and one for production.
A natural way to do this would be to keep two copies of the file with '-staging' and '-prod' appended on the name and have your CI choose the one for the environment it is deploying to. Or you could have a shell script in your CI that does a sed/replace on the particular values you want to switch for the environment.
Using helm you could pass in command-line parameters at deploy time or via a parameter-file (the values.yaml).
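A rough sketch of what that looks like with Helm (all names here are illustrative, not from your setup). A templated ConfigMap in templates/configmap.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  ENVIRONMENT: {{ .Values.environment | quote }}
with a default in values.yaml:
environment: staging
and an override at deploy time (Helm 3 syntax):
helm upgrade --install myapp . --set environment=production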
I'm using a CI to update my Kubernetes cluster whenever there's an update to an image. Whenever the image is pushed with the latest tag, the CI runs kubectl apply on the existing deployment, but nothing gets updated.
This is what runs:
$ kubectl apply --record --filename /tmp/deployment.yaml
My goal is that when the apply runs, a rolling deployment gets executed.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us.gcr.io/joule-eed41/api:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 1337
        args:
        - /bin/sh
        - -c
        - echo running api;npm start
        env:
        - name: NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: config
              key: NAMESPACE
As others suggested, use a specific tag.
Set the new image using the following command:
kubectl set image deployment/deployment_name container_name=image_name:image_tag
In your case it would be
kubectl set image deployment/api api=us.gcr.io/joule-eed41/api:0.1
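You can then watch the rolling update complete with:
kubectl rollout status deployment/api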
As @ksholla20 mentioned, using kubectl set image is a good option for many (most?) cases.
But if you can't change the image tag consider using:
1 ) kubectl rollout restart deployment/<name>
(reference).
2 ) kubectl patch deployment <name> -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$CURRENT_BUILD_HASH_OR_DATE\"}}}}}" (reference)
(*) Notice that the patch command allows you to change specific properties in the deployment (or any other chosen object), like the label selector and the pod label, or other properties like the value of the NAMESPACE environment variable in your example.
I've run into the same problem, and none of the solutions posted so far helped. The solution is easy, but not easy to see or predict: the applied yaml will generate both a deployment and a replicaset the first time it's run. Unfortunately, applying changes to the manifest likely only replaces the replicaset, while the deployment remains unchanged. This is a problem because some changes need to happen at the deployment level, but the old deployment hangs around. For best results, delete the deployment and ensure all previous deployments and replicasets are deleted, then apply the updated manifest.