I set my secret like this:
$ kubectl create secret generic aws-region VAL=eu-west-1 \
> -o yaml --dry-run | kubectl replace -f -
secret "aws-region" replaced
Seems to be set:
kubectl get secret | ack region
aws-region Opaque 0 20m
And I try to read it like this:
- name: AWS_REGION
  valueFrom:
    secretKeyRef:
      name: aws-region
      key: VAL
But that gives a CreateContainerConfigError when I run kubectl apply -f service.yml
What am I doing wrong?
Since you're only showing us a small part of service.yml, it's impossible to tell exactly where the error comes from, but I can confirm that the following works (using a test pod I created):
$ kubectl create secret generic aws-region --from-literal=VAL=eu-west-1
$ kubectl apply -f pod.yaml
$ kubectl describe po/envfromsecret
Name: envfromsecret
Namespace: default
...
Environment:
AWS_REGION: <set to the key 'VAL' in secret 'aws-region'> Optional: false
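The pod.yaml referenced above isn't shown here; a minimal sketch that reproduces that describe output could look like this (the image and command are assumptions, only the env section matters):
apiVersion: v1
kind: Pod
metadata:
  name: envfromsecret
spec:
  containers:
    - name: main
      image: busybox                              # assumed image, used only for illustration
      command: ["sh", "-c", "env && sleep 3600"]
      env:
        - name: AWS_REGION
          valueFrom:
            secretKeyRef:
              name: aws-region
              key: VAL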
UPDATE: I now noticed that the DATA column in the output of your kubectl get secret command is actually 0, i.e. the secret is empty. Consider using the form I used above (with --from-literal=) to create the secret.
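To double-check, you can recreate the secret with --from-literal= and confirm the key is actually stored, a quick sketch (--dry-run=client is the current spelling of the deprecated --dry-run flag):
$ kubectl create secret generic aws-region --from-literal=VAL=eu-west-1 \
>   --dry-run=client -o yaml | kubectl replace -f -
$ kubectl get secret aws-region          # the DATA column should now show 1
$ kubectl get secret aws-region -o jsonpath='{.data.VAL}' | base64 --decode
eu-west-1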
I'm trying to install Redis in a Kubernetes environment with the Bitnami Redis Helm chart. I want to use a defined password rather than a randomly generated one, but I'm getting the error below when I try to connect to the Redis master or replicas with redis-cli.
I have no name!@redis-client:/$ redis-cli -h redis-master -a $REDIS_PASSWORD
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Warning: AUTH failed
I created a Kubernetes secret like this.
---
apiVersion: v1
kind: Secret
metadata:
  name: redis-secret
  namespace: redis
type: Opaque
data:
  redis-password: YWRtaW4xMjM0Cg==
And in the values.yaml file I updated the auth spec as below.
auth:
  enabled: true
  sentinel: false
  existingSecret: "redis-secret"
  existingSecretPasswordKey: "redis-password"
  usePasswordFiles: false
If I don't define the existingSecret field and use the randomly generated password instead, I can connect without an issue. I also tried AUTH admin1234 after the Warning: AUTH failed error, but it didn't work either.
You can achieve it in a much simpler way, i.e. by running:
$ helm install my-release \
--set auth.password="admin1234" \
bitnami/redis
This will update your "my-release-redis" secret, so when you run:
$ kubectl get secrets my-release-redis -o yaml
you'll see it contains your password, already base64-encoded:
apiVersion: v1
data:
  redis-password: YWRtaW4xMjM0Cg==
kind: Secret
...
In order to get your password, you need to run:
export REDIS_PASSWORD=$(kubectl get secret --namespace default my-release-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
This will set and export the REDIS_PASSWORD environment variable containing your Redis password.
And then you may run your redis-client pod:
kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:6.2.4-debian-10-r13 --command -- sleep infinity
which will set the REDIS_PASSWORD environment variable within your redis-client pod by assigning it the value of the REDIS_PASSWORD variable exported locally in the previous step.
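From there you can open a shell in the client pod and authenticate; the master service name below (my-release-redis-master) follows the usual Bitnami naming for a release called my-release and is an assumption here:
kubectl exec -it redis-client --namespace default -- bash
# inside the pod:
redis-cli -h my-release-redis-master -a "$REDIS_PASSWORD"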
The issue was how I encoded the password with the echo command. There was a newline character at the end of my password. I tried the printf command instead of echo and it produced a different result.
printf admin1234 | base64
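The difference is easy to see when comparing both encodings directly (echo appends a trailing newline unless you pass -n):
echo admin1234 | base64       # YWRtaW4xMjM0Cg==  ("admin1234\n")
printf admin1234 | base64     # YWRtaW4xMjM0      ("admin1234")
echo -n admin1234 | base64    # YWRtaW4xMjM0      (same as printf)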
Using Kubernetes 1.19.3, I initialize environment variable values in 3 different ways:
env field with explicit key/value in the pod definition
envFrom using configMapRef and secretRef
When a key name is duplicated, as shown in the example below, DUPLIK1 and DUPLIK2 are defined multiple times with different values.
What is the precedence rule that Kubernetes uses to assign the final value to the variable?
# create some test Key/Value configs and Key/Value secrets
kubectl create configmap myconfigmap --from-literal=DUPLIK1=myConfig1 --from-literal=CMKEY1=CMval1 --from-literal=DUPLIK2=FromConfigMap -n mydebugns
kubectl create secret generic mysecret --from-literal=SECRETKEY1=SECval1 --from-literal=SECRETKEY2=SECval2 --from-literal=DUPLIK2=FromSecret -n mydebugns
# create a test pod
cat <<EOF | kubectl apply -n mydebugns -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    image: busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: DUPLIK1
      value: "Key/Value defined in field env"
    envFrom:
    - configMapRef:
        name: myconfigmap
    - secretRef:
        name: mysecret
  restartPolicy: Never
EOF
Show the environment variable values. The result is deterministic: deleting the resources and recreating them always ends up with the same result.
kubectl logs pod/pod1 -n mydebugns
CMKEY1=CMval1
DUPLIK1=Key/Value defined in field env
DUPLIK2=FromSecret
SECRETKEY1=SECval1
SECRETKEY2=SECval2
Clean up the test resources:
kubectl delete pod/pod1 -n mydebugns
kubectl delete cm/myconfigmap -n mydebugns
kubectl delete secret/mysecret -n mydebugns
From Kubernetes docs:
envVar: List of environment variables to set in the container.
Cannot be updated.
envFrom: List of sources to populate environment variables in the
container. The keys defined within a source must be a C_IDENTIFIER.
All invalid keys will be reported as an event when the container is
starting. When a key exists in multiple sources, the value associated
with the last source will take precedence. Values defined by an Env
with a duplicate key will take precedence. Cannot be updated.
The above link clearly states that env takes precedence over envFrom and cannot be updated.
Also, when a referenced key is present in multiple sources, the value associated with the last source overrides all previous values.
Based on the above, the result you are seeing is expected behavior:
DUPLIK1 is defined in the env field, so its value takes precedence over the one coming from envFrom
DUPLIK2 is only defined via envFrom, so the value from the Secret takes precedence because the secretRef is listed last
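As an illustration of the last-source rule (a hypothetical variation of the pod above, not something from the question), swapping the order of the envFrom entries would flip which value DUPLIK2 gets:
envFrom:
- secretRef:
    name: mysecret        # now listed first
- configMapRef:
    name: myconfigmap     # listed last, so DUPLIK2 would come out as FromConfigMap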
I have the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  application.properties: |+
    key1: value1
    key2: value2
    keyN: valueN
The ConfigMap is mounted into the pod and works fine.
My requirement is to programmatically replace the values of some of the keys.
I can run a shell/Python script and any kubectl command.
You can use the kubectl patch command to update K8s resources.
kubectl patch configmap/test \
  --type=json \
  '-p=[{"op": "replace", "path": "/data/key1", "value": "test1"}]'
An important point to note, as mentioned by Henry, is that the application also has to re-read the properties after they have been changed.
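Note that in the ConfigMap from the question the keys live inside the single application.properties entry, so a hedged variant of the same idea would patch that whole entry rather than an individual key:
kubectl patch configmap/test \
  --type=json \
  '-p=[{"op": "replace", "path": "/data/application.properties", "value": "key1: test1\nkey2: value2\nkeyN: valueN\n"}]'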
You can use a Bash script to dynamically replace some keys and values in your ConfigMaps.
I've created a simple Bash script to illustrate how it may work on my kubeadm cluster v1.20:
#!/bin/bash

keyName="key1"
value="value100"

read -p 'Enter ConfigMap name: ' configmapName

if kubectl get cm ${configmapName} 1> /dev/null 2>&1; then
  echo "ConfigMap name to modify: ${configmapName}"
else
  echo "ERROR: bad ConfigMap name"
  exit 1
fi

kubectl patch cm ${configmapName} -p "{\"data\":{\"${keyName}\":\"${value}\"}}"
In the above example you need to pass the ConfigMap name and set what you want to modify.
In addition, you may want to pass the keyName and value values as command-line arguments, in a similar way to the configmapName value (see the sketch after the example run below).
You can see an example of how the above script works:
root@kmaster:~# ./replaceValue.sh
Enter ConfigMap name: test
ConfigMap name to modify: test
configmap/test patched
root@kmaster:~# kubectl describe cm test
Name: test
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
application.properties:
----
key1:
----
value100
key2:
----
value2
keyN:
----
valueN
Events: <none>
root@kmaster:~#
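If you'd like to pass keyName and value as command-line arguments, as suggested above, a minimal sketch (the argument handling is my assumption) could be:
#!/bin/bash
# Usage: ./replaceValue.sh <configmap-name> <key> <value>
configmapName="$1"
keyName="$2"
value="$3"

if ! kubectl get cm "${configmapName}" 1> /dev/null 2>&1; then
  echo "ERROR: bad ConfigMap name"
  exit 1
fi

kubectl patch cm "${configmapName}" -p "{\"data\":{\"${keyName}\":\"${value}\"}}"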
Note: If you want to use kubectl replace instead of kubectl patch, you can use the command below (e.g. sourceValue="key1: value1" and destinationValue="key1: value100"):
kubectl get cm ${configmapName} -o yaml | sed "s/${sourceValue}/${destinationValue}/" | kubectl replace -f -
I am trying to pass an environment variable to a Kubernetes container.
What have I done so far?
Create a deployment
kubectl create deployment foo --image=foo:v1
Create a NodePort service and expose the port
kubectl expose deployment/foo --type=NodePort --port=9000
See the pods
kubectl get pods
Dump the configurations (so as to add the environment variable)
kubectl get deployments -o yaml > dev/deployment.yaml
kubectl get svc -o yaml > dev/services.yaml
kubectl get pods -o yaml > dev/pods.yaml
Add the env variable to the pods
env:
  - name: FOO_KEY
    value: "Hellooooo"
Delete the svc, pods, and deployments
kubectl delete -f dev/ --recursive
Apply the configuration
kubectl apply -f dev/ --recursive
Verify env parameters
kubectl describe pods
Something weird
If I manually change the meta information of the pod YAML and hard-code the name of the pod, it gets the env variable. However, this time two pods come up: one with the hard-coded name and the other with a hash suffix. For example, if the name I hard-coded was "foo", two pods, namely foo and foo-12314faf (example), would appear in kubectl get pods. Can you explain why?
Question
Why does the verification step not show the environment variable?
As the issue was resolved in the comment section, I am summarizing it here.
If you want to set env on pods, I would suggest you use the set subcommand.
kubectl set env --help will provide more detail, such as how to list the existing env variables and create new ones.
Examples:
# Update deployment 'registry' with a new environment variable
kubectl set env deployment/registry STORAGE_DIR=/local
# List the environment variables defined on a deployments 'sample-build'
kubectl set env deployment/sample-build --list
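Applied to the deployment from this question, that could look like this (a sketch, reusing the FOO_KEY variable from above):
kubectl set env deployment/foo FOO_KEY=Hellooooo
kubectl set env deployment/foo --list
kubectl describe pods | grep -A 3 Environment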
A Deployment enables declarative updates for Pods and ReplicaSets. Pods are not typically launched directly on a cluster; instead, pods are usually managed by a ReplicaSet, which in turn is managed by a Deployment.
The following thread discusses this in more detail: what-is-the-difference-between-a-pod-and-a-deployment
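You can see that ownership chain on the deployment created in the question (a sketch; the app=foo label is what kubectl create deployment normally sets and is an assumption here):
kubectl get deployment foo
kubectl get replicaset -l app=foo    # owned by the deployment
kubectl get pods -l app=foo          # pod names look like foo-<replicaset-hash>-<random-suffix>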
You can add any number of env vars in your deployment file:
spec:
  containers:
    - name: auth
      image: lord/auth
      env:
        - name: MONGO_URI
          value: "mongodb://auth-mongo-srv:27017/auth"
process.env.MONGO_URI
Or you can create a secret first, then use the newly created secret in any number of deployment files to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
spec:
  containers:
    - name: auth
      image: lord/auth
      env:
        - name: MONGO_URI
          value: "mongodb://auth-mongo-srv:27017/auth"
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
process.env.MONGO_URI
process.env.JWT_KEY
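A quick way to check that both variables end up in the container (a sketch; the deployment name auth and the manifest file name are assumptions):
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
kubectl apply -f auth-deployment.yaml          # the file containing the spec above
kubectl exec deploy/auth -- printenv MONGO_URI JWT_KEY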
I want to edit the aws-auth ConfigMap during a Vagrant deployment to give my vagrant user access to the EKS cluster. I need to add a snippet into the existing aws-auth ConfigMap. How do I do this programmatically?
If you do a kubectl edit -n kube-system configmap/aws-auth you get
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::123:role/nodegroup-abc123
      username: system:node:{{EC2PrivateDNSName}}
kind: ConfigMap
metadata:
  creationTimestamp: "2019-05-30T03:00:18Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "19055217"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: 0000-0000-0000
I need to enter this bit in there somehow:
mapUsers: |
  - userarn: arn:aws:iam::123:user/sergeant-poopie-pants
    username: sergeant-poopie-pants
    groups:
      - system:masters
I've tried to do a cat <<EOF > {file} EOF and then patch from the file, but that option doesn't exist in patch, only in the create context.
I also found this: How to patch a ConfigMap in Kubernetes
but it didn't seem to work, or perhaps I didn't really understand the proposed solutions.
There are a few ways to automate things. The direct way would be kubectl get configmap -o yaml ... > cm.yml && patch ... < cm.yml > cm2.yml && kubectl apply -f cm2.yml, or something like that. You might want to use a script that parses and modifies the YAML data rather than a literal patch, to make it less brittle. You could also do something like EDITOR="myeditscript" kubectl edit configmap ..., but that's more clever than I would want to get.
First, note that mapRoles and mapUsers are actually treated as strings, even though they contain structured data (YAML).
While this problem is solvable with jsonpatch, it is much easier using jq and kubectl apply, like this:
kubectl get cm aws-auth -o json \
  | jq --arg add "`cat add.yaml`" '.data.mapUsers = $add' \
  | kubectl apply -f -
Where add.yaml is something like this (notice the lack of extra indentation):
- userarn: arn:aws:iam::123:user/sergeant-poopie-pants
  username: sergeant-poopie-pants
  groups:
    - system:masters
See also https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html for more information.
Here is a kubectl patch one-liner for patching the aws-auth configmap:
kubectl patch configmap -n kube-system aws-auth -p '{"data":{"mapUsers":"[{\"userarn\": \"arn:aws:iam::0000000000000:user/john\", \"username\": \"john\", \"groups\": [\"system:masters\"]}]"}}'
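Either way, you can confirm the change afterwards by dumping the ConfigMap again:
kubectl get configmap -n kube-system aws-auth -o yaml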