Add pod name to ConfigMap in Kubernetes

I'm trying to figure out how to change one string inside a ConfigMap in Kubernetes. I have a pretty simple ConfigMap:
apiVersion: v1
data:
  config.cfg: |-
    [authentication]
    USERNAME=user
    PASSWORD=password
    [podname]
    PODNAME=metadata.podName
kind: ConfigMap
metadata:
  name: name_here
And I need to mount the ConfigMap inside a couple of Pods, but PODNAME should match the name of the current Pod. Is this possible, or can it be done another way? Thanks!

I do not think this can be done with a ConfigMap, but you can set environment variables in your Pod spec that reference Pod fields.
apiVersion: v1
kind: Pod
metadata:
  name: test-ref-pod-name
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "sh", "-c" ]
    args:
    - env | grep PODNAME
    env:
    - name: PODNAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never
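After the Pod has run to completion, you can confirm the variable was set (the Pod name comes from the manifest above):
kubectl logs test-ref-pod-name
# prints something like: PODNAME=test-ref-pod-name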
See official documentation: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables

This doesn't answer your question exactly, but the pod name normally ends up as the hostname inside the pod and can be accessed through a standard environment variable:
echo $HOSTNAME
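If you still need the value inside the mounted config file rather than in the environment, one option is to template it at container start. Below is a minimal sketch, assuming the ConfigMap from the question is mounted read-only at /config; the app binary my-app and the rendered path /tmp/config.cfg are placeholders:
spec:
  containers:
  - name: app
    image: busybox
    command: [ "sh", "-c" ]
    args:
      # Copy the mounted template, replacing the PODNAME line with the real
      # Pod name (the hostname), then start the (hypothetical) application.
      - sed "s/^PODNAME=.*/PODNAME=$HOSTNAME/" /config/config.cfg > /tmp/config.cfg &&
        exec my-app --config /tmp/config.cfg
    volumeMounts:
    - name: config
      mountPath: /config
  volumes:
  - name: config
    configMap:
      name: name_here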

Related

Why can't a K8s pod read a stored secret?

I cannot access a secret I created. I inserted a secret into the K8s secret store and am simply trying to test access to it with this YAML...
apiVersion: v1
kind: Namespace
metadata:
  name: space1
---
apiVersion: v1
kind: Pod
metadata:
  name: space1
  namespace: space1
spec:
  containers:
  - name: space1-pod
    image: repo/python-image:latest
    imagePullPolicy: Always
    command: ['sh', '-c', 'echo "Username: $USER" "Password: $PASSWORD"']
    env:
    - name: USER
      valueFrom:
        secretKeyRef:
          name: tool-user
          key: username
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: tool-user
          key: password
  restartPolicy: Always
The Pod status is "waiting to start: CreateContainerConfigError", and I receive this error...
Error: secret "tool-user" not found
Despite the result I get from "kubectl get secrets" which clearly shows...
NAME        TYPE     DATA   AGE
tool-user   Opaque   2      4d1h
kubectl get secrets shows secrets from the default namespace; add -n space1 to see secrets from the namespace your Pod runs in.
Secrets are namespaced objects. Make sure the secret "tool-user" is created in the "space1" namespace.
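For example (the literal values are placeholders):
# Check whether the secret exists in the Pod's namespace
kubectl get secret tool-user -n space1

# If it only exists in the default namespace, create it in space1 as well
kubectl create secret generic tool-user -n space1 \
  --from-literal=username=<username> \
  --from-literal=password=<password>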

Kubernetes: how to get the name of the pod I'm running in?

I want to run Python code inside a pod. The pod is created by Airflow, which I don't control.
I want to somehow get the name of the pod I'm running in.
How can it be done?
You can tell Kubernetes to set an environment variable for you:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
and then in Python you can access it like this:
import os
pod_name = os.environ['MY_POD_NAME']
Or you can just open and read /etc/hostname:
with open('/etc/hostname') as f:
    pod_name = f.read().strip()  # strip the trailing newline
Exposing Pod and Cluster Vars to Containers
Let's say you need some data about the Pod or the K8s environment in your application, for example to add Pod information as metadata to logs, such as:
Pod IP
Pod Name
Service Account of Pod
NOTE: All of this Pod information can be made available to the container through the Pod manifest.
There are two ways to expose Pod fields to a running container:
Environment Variables
Volume Files
Example of Environment Variable
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-env
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      - name: log-sidecar
        image: busybox
        command: [ 'sh', '-c' ]
        args:
        - while true; do
            echo sync app logs;
            printenv POD_NAME POD_IP POD_SERVICE_ACCOUNT;
            sleep 20;
          done;
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
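Example of Volume Files (a minimal sketch of the second way, using a downwardAPI volume; the Pod name and mount path are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: [ "sh", "-c", "cat /etc/podinfo/podname; sleep 3600" ]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
Unlike the environment-variable form, the files for labels and annotations are updated if those fields change while the Pod is running.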
Kubernetes Docs
Source
One way can be:
In the Kubernetes cluster you are operating in, run
kubectl get pods
then look at the YAML of a pod with
kubectl get pods <pod-name> -o yaml
and in that output find the container images used by the pod, and identify the image tag that belongs to your container.
That means that when you build your image, the image has a name and a tag, which is then pushed to a registry from where your pod pulls the image and starts the container.
You need to find that image name and tag in the pod YAML using the commands above.
Try the below:
# List all pods in all namespaces
kubectl get pods --all-namespaces
# List all pods in the current namespace
kubectl get pods -o wide
Then you can see more details with:
kubectl describe pod <pod-name>
You can also refer to the following Stack Overflow question and the related answers:
get-current-pod

Using a ConfigMap created with a generator in Kustomize/Kubernetes

I have been trying to figure out how to consume a ConfigMap created using a ConfigMap generator via Kustomize.
When created using Kustomize generators, the configMaps are named with a special suffix. See here:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-a-configmap-from-generator
The question is: how can this be referenced?
You don't reference it yourself. Kustomize recognizes where the configMap is used in the other resources (like a Deployment) and changes those references to use the name+hash.
The reason for this is so that if you change the configMap, Kustomize generates a new hash and updates the Deployment, causing a rolling restart of the Pods.
If you don't want this behavior, you can add the following to your kustomization.yaml file:
generatorOptions:
  disableNameSuffixHash: true
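For example, a minimal sketch (the file names, the example-config name, and the literal are assumptions): the Deployment refers to the ConfigMap by its plain name, and kubectl apply -k rewrites that reference to the generated name+hash.
# kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: example-config
  literals:
  - LOG_LEVEL=debug

# deployment.yaml (container fragment): reference the plain name;
# Kustomize rewrites it to example-config-<hash> in the rendered output.
        envFrom:
        - configMapRef:
            name: example-config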
It is specified in the doc: when you run kubectl apply -k ., a ConfigMap is created with a name like game-config-4-m9dm2f92bt.
You can check that the ConfigMap was created with kubectl get configmap. This ConfigMap will contain a data field holding the values you provided.
Now you can use this ConfigMap in a Pod as usual. Example from the k8s docs:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: special.how
  restartPolicy: Never
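For the example above to work, a ConfigMap named special-config with a special.how key has to exist. A minimal sketch (the value is a placeholder):
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  special.how: very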
You can also use a ConfigMap as a volume, like this example from the k8s docs:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
    - name: demo
      image: alpine
      command: ["sleep", "3600"]
      env:
        # Define the environment variable
        - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
                                     # from the key name in the ConfigMap.
          valueFrom:
            configMapKeyRef:
              name: game-demo           # The ConfigMap this value comes from.
              key: player_initial_lives # The key to fetch.
        - name: UI_PROPERTIES_FILE_NAME
          valueFrom:
            configMapKeyRef:
              name: game-demo
              key: ui_properties_file_name
      volumeMounts:
      - name: config
        mountPath: "/config"
        readOnly: true
  volumes:
    # You set volumes at the Pod level, then mount them into containers inside that Pod
    - name: config
      configMap:
        # Provide the name of the ConfigMap you want to mount.
        name: game-demo
        # An array of keys from the ConfigMap to create as files
        items:
        - key: "game.properties"
          path: "game.properties"
        - key: "user-interface.properties"
          path: "user-interface.properties"
See the k8s official docs.
I was struggling with this too. I could not figure out why kustomize was not updating the configmap name for the volume in the deployment to include the hash. What solved this for me was to add namespace: <namespace> in the kustomization.yaml for both the base and overlay.
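For example (the namespace and file names are placeholders):
# kustomization.yaml (both base and overlay)
namespace: my-namespace
resources:
- deployment.yaml
configMapGenerator:
- name: app-config
  files:
  - config.properties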

Get value of configMap from mountPath

I created a ConfigMap this way:
kubectl create configmap some-config --from-literal=key4=value1
After that I created a Pod that mounts this ConfigMap (Pod manifest not shown). I connect to this Pod this way:
k exec -it nginx-configmap -- /bin/sh
I found the folder /some/path but I could not get the value of key4.
If you refer to your ConfigMap in your Pod this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: config-volume
  volumes:
    - name: config-volume
      configMap:
        name: some-config
it will be available in your Pod as a file /var/www/html/key4 with the content of value1.
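You can verify this from inside the Pod, for example:
kubectl exec -it mypod -- /bin/sh
ls /var/www/html          # one file per ConfigMap key, e.g. key4
cat /var/www/html/key4    # prints value1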
If you'd rather have it available as an environment variable, you need to refer to it this way:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      envFrom:
      - configMapRef:
          name: some-config
As you can see, you don't need any volumes or volume mounts for it.
Once you connect to such Pod by running:
kubectl exec -ti mypod -- /bin/bash
You will see that your environment variable is defined:
root@mypod:/# echo $key4
value1

Kubernetes: keep variables fixed to pods

I have an application with 3 pods, and each pod needs a fixed variable assigned to it. So if everything is running fine, the three pods would have var1, var2, and var3 stored on the corresponding pods.
If the first pod gets replaced which has var1, how can I determine that the other 2 pods have var2 and var3, and thus know that the new pod should be assigned var1?
Can this be done with Stateful Sets?
I see two ways of doing that:
Using StatefulSets (see the sketch after the examples below):
For a StatefulSet with N replicas, each Pod in the StatefulSet will be
assigned an integer ordinal, from 0 up through N-1, that is unique
over the Set.
Creating the Pods manually. Example:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-3
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-3-ctr
    image: polinux/stress
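For the StatefulSet option, a minimal sketch might look like this (the names are placeholders, and a matching headless Service is omitted here); the Pods will be named my-app-0, my-app-1 and my-app-2 and keep those names across restarts:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: busybox
        command: [ "sh", "-c", "echo running as $HOSTNAME; sleep 3600" ]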
If you need your application to be aware of the Pod it is running on, there is an interesting page in the Kubernetes documentation: "Expose Pod Information to Containers Through Environment Variables".
Example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod-var1
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
Using a StatefulSet, you can extract this from the Pod name.
env:
  - name: podname
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
and then get it from the end of the name. The Pods in a StatefulSet will be named <StatefulSetName>-<ordinal>; see pod-identity.
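A minimal sketch of that mapping inside the container (the var1/var2/var3 naming comes from the question; everything else is a placeholder):
# $HOSTNAME ends in the ordinal for StatefulSet Pods, e.g. my-app-2
ORDINAL="${HOSTNAME##*-}"
MY_VAR="var$((ORDINAL + 1))"   # ordinal 0 -> var1, 1 -> var2, 2 -> var3
echo "this pod owns $MY_VAR"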