I'm trying to generate a unique key in the application, using the date/time and a number sequence managed by the application. It works fine as long as we don't run multiple instances of the application.
The application is running in a Kubernetes pod with auto scaling configured.
Is there any way to generate or get a unique numeric identifier per pod and put it in the container's environment variables? The identifier does not need to be stable across restarts, so we would rather not have to use StatefulSets.
UPDATE
The problem we are having with the UID is the size of the collections; that's why we are looking for a value that is about the size of a bigint, or for any other numeric unique ID we could use as an alternative to the UID.
...get a unique and numeric identifier per pod and put them in the container environment variables?
You can expose the pod's UID (metadata.uid) as an environment variable via the Downward API:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    run: busybox
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["ash", "-c", "echo ${MY_UID} && sleep 3600"]
    env:
    - name: MY_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
Running kubectl logs <pod> will print the unique ID assigned to the environment variable in your pod.
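If, per the update above, you need a numeric value roughly the size of a bigint rather than the UUID string, one option is to derive it inside the container from the injected UID. This is only a sketch (hash collisions are extremely unlikely but not impossible):

HEX=$(echo -n "$MY_UID" | md5sum | cut -c1-15)  # first 15 hex chars fit in a signed 64-bit integer
echo $((0x$HEX))                                # numeric per-pod ID derived from the pod UID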
Related
I have a pod with the following spec:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: WATCH_NAMESPACE
      valueFrom:
        configMapKeyRef:
          name: watch-namespace-config
          key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod looks up values in the watch-namespace-config ConfigMap.
When I manually change the ConfigMap values, I want the pod to restart automatically to reflect the change. Is that possible in any way?
This is currently a feature in progress: https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader - https://github.com/stakater/Reloader
It watches for changes in ConfigMaps and/or Secrets and then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet, or Rollout.
How to use it - https://github.com/stakater/Reloader#how-to-use-reloader
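A minimal sketch of wiring Reloader up for the scenario above; the Deployment name here is illustrative, and reloader.stakater.com/auto is Reloader's documented opt-in annotation:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  annotations:
    reloader.stakater.com/auto: "true"   # Reloader rolls this Deployment when its ConfigMaps/Secrets change
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: watch-namespace-config
              key: WATCH_NAMESPACE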
As you correctly mentioned, once you update a ConfigMap or Secret, the Deployment/Pod/StatefulSet is not updated.
An alternative solution for this scenario is to use Kustomize.
Kustomize generates a unique name every time you update the ConfigMap/Secret by appending a generated hash, for example: ConfigMap-xxxxxx.
If you then use:
kubectl kustomize . | kubectl apply -f -
kubectl will apply the resources with the new ConfigMap name and values.
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
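For reference, a minimal kustomization.yaml sketch using configMapGenerator (the resource file name is illustrative); Kustomize appends a content hash to the generated ConfigMap name and rewrites the references in the listed resources:

# kustomization.yaml (sketch)
resources:
- deployment.yaml                # the Deployment that consumes the ConfigMap

configMapGenerator:
- name: watch-namespace-config
  literals:
  - WATCH_NAMESPACE=dev

Changing the literal produces a new hashed name, so kubectl kustomize . | kubectl apply -f - rolls the Deployment to the new value.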
I want to define a PVC and create a volume in order to get some internal files out of the container (I am using Helm chart definitions). I want to know whether there is any way to use the pod ID in the mountPath that I am defining in deployment.yaml.
In the end I want to get the following folder structure on my node:
/dockerdata-nfs/<podID>/path
volumeMounts:
- name: volumeName
  mountPath: /abc/path
volumes:
- name: volumeName
  hostPath:
    path: /dockerdata-nfs/podID/
You can create a mountPath based on the pod UID using subPathExpr. YAML below:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: UID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    image: busybox
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(UID)
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
This feature was introduced in Kubernetes version 1.14+.
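Assuming the pod1 example above is running, you can check which host directory it ended up with:

kubectl get pod pod1 -o jsonpath='{.metadata.uid}'
# files written to /logs inside the container appear under /var/log/pods/<that UID>/ on the node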
A pod will get a new UID on recreation, so why would you try to hard-code this value?
Pods are considered to be relatively ephemeral (rather than durable) entities. As discussed in pod lifecycle, Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion. If a Node dies, the Pods scheduled to that node are scheduled for deletion, after a timeout period. A given Pod (as defined by a UID) is not “rescheduled” to a new node; instead, it can be replaced by an identical Pod, with even the same name if desired, but with a new UID (see replication controller for more details).
We are on Kubernetes 1.9.0 and wonder if there is a way to access the "ordinal index" of a pod within its StatefulSet configuration file. We would like to dynamically assign a value (derived from the ordinal index) to the pod's labels and later use it for setting pod affinity (or anti-affinity) under spec.
Alternatively, is the pod's instance name available within the StatefulSet config file? If so, we can hopefully extract the ordinal index from it and dynamically assign it to a label (for later use for affinity).
You can get the unique name of your pod in a StatefulSet as an environment variable; you have to extract the ordinal index from it yourself, though.
In the container's spec:
env:
- name: cluster.name
  value: k8s-logs
- name: node.name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
Right now the only option is to extract the index from the hostname:
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "export INDEX=${HOSTNAME##*-}"]
I have a use case where I publish messages from containers deployed in various regions, and I would like to tag those messages with the region they originated from. I also want to do this in a container-engine-agnostic way, so I specifically want to access the region info as an environment variable.
You can expose pod information as environment variables using the Downward API.
However, this isn't supported for node labels, as per the related GitHub issues.
What you can do is follow this example and label your pods/deployments (and maybe also pin those pods/deployments to specific nodes using a nodeSelector), and then expose that info. An example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
  labels:
    zone: us-west-2
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: ["sh", "-c", "echo $(ZONE)"]
    env:
    - name: ZONE
      valueFrom:
        fieldRef:
          fieldPath: metadata.labels['zone']
  restartPolicy: Never
Please note, I haven't tested this, so YMMV.
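If you additionally pin those pods to nodes in the matching zone, a nodeSelector sketch (assuming your nodes carry the standard topology.kubernetes.io/zone label) would be:

spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-west-2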
Say I have the following deployment spec.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: deployment-example
spec:
  # 3 Pods should exist at all times.
  replicas: 3
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
      - name: nginx
        # Run this image
        image: nginx:1.10
Here, the name of the container is nginx. Is there a way to get the "nginx" string from within the running container?
I mean, once I exec into the container with something like
kubectl exec -it <pod-name> -c nginx bash
Is there a programmatic way to get the container name given in the pod spec?
Note that this is not necessarily the docker container name that gets printed in docker ps. Kubernetes composes a longer name for the spawned docker container.
The Downward API looks promising in this regard. However, the container name is not mentioned in the Capabilities of the Downward API section.
The container name is not available through the Downward API. You can use YAML anchors and aliases (references) instead. Unfortunately they are not scoped, so you will have to come up with unique names for the anchors; it does not matter what they are, as they are not present in the parsed document.
Subsequent occurrences of a previously serialized node are presented as alias nodes. The first occurrence of the node must be marked by an anchor to allow subsequent occurrences to be presented as alias nodes.
An alias node is denoted by the “*” indicator. The alias refers to the most recent preceding node having the same anchor. It is an error for an alias node to use an anchor that does not previously occur in the document. It is not an error to specify an anchor that is not used by any alias node.
First occurrence: &anchor Foo
Second occurrence: *anchor
Override anchor: &anchor Bar
Reuse anchor: *anchor
Here is a full working example:
apiVersion: v1
kind: Pod
metadata:
  name: reftest
spec:
  containers:
  - name: &container1name first
    image: nginx:1.10
    env:
    - name: MY_CONTAINER_NAME
      value: *container1name
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
  - name: &container2name second
    image: nginx:1.10
    env:
    - name: MY_CONTAINER_NAME
      value: *container2name
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
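Assuming the reftest pod above is running, the aliased value can be checked with (it should print first):

kubectl exec reftest -c first -- printenv MY_CONTAINER_NAME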
I am not sure the container name within the deployment is available directly.
For the deployment name, one approach that works in OpenShift for deployment configs (and so presumably for Kubernetes Deployments) is to take the value of the HOSTNAME environment variable, which will be of the form <deployment-name>-<deployment-number>-<random-string>.
Drop everything from the second-to-last - onwards, and the leading component is the deployment name.
It would be a fair bit of mucking around, but one could then maybe infer the container name by querying the REST API for the Deployment resource object based on that deployment name.
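A rough sketch of that lookup with kubectl (this assumes the usual <deployment>-<replicaset-hash>-<pod-hash> hostname form and that the pod has credentials to read Deployments):

DEPLOYMENT=$(hostname | sed 's/-[^-]*-[^-]*$//')      # drop the last two generated components
kubectl get deployment "$DEPLOYMENT" -o jsonpath='{.spec.template.spec.containers[*].name}'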
What specifically do you need the container name for? If we knew what you need it for, we might be able to suggest other options.
How about using the container hostname then chopping off the generated components?
$ kubectl exec alpine-tools-645f786645-vfp82 hostname | cut -d- -f1,2
alpine-tools
Although this is very dependent on how you name Pods/containers.
$ kubectl exec -it alpine-tools-645f786645-vfp82 /bin/sh
/ # hostname
alpine-tools-645f786645-vfp82
/ # hostname | cut -d- -f1,2
alpine-tools