Pass a StatefulSet's replica count to its pods - Kubernetes

I have a StatefulSet and I need to know the current replica count from inside the pod. To do so, I tried:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sample-mariadb
  namespace: demo
spec:
  replicas: 3
  template:
    spec:
      containers:
      - env:
        - name: REPLICAS
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.replicas
and got this error:
Warning FailedCreate 4m15s (x17 over 9m43s) statefulset-controller create Pod sample-mariadb-0 in StatefulSet sample-mariadb failed error: Pod "sample-mariadb-0" is invalid: spec.containers[1].env[3].valueFrom.fieldRef.fieldPath: Invalid value: "spec.replicas": error converting fieldPath: field label not supported: spec.replicas
How can I get the current replica count from inside the pod?

You can only expose fields that are part of the Pod specification. The spec.replicas field is part of the StatefulSet specification, not the underlying Pod's. The template part of a StatefulSet is the Pod specification, hence the error you are getting.
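For reference, fieldRef only supports a small set of Pod-level fields. A minimal sketch (the variable names here are just illustrative):
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name   # other supported fields include metadata.namespace,
- name: NODE_NAME                # metadata.uid, metadata.labels['<KEY>'],
  valueFrom:                     # spec.nodeName, spec.serviceAccountName,
    fieldRef:                    # status.podIP and status.hostIP
      fieldPath: spec.nodeName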

I have looked for a solution, and the alternatives I could find are:
Setting the environment variable with a hard-coded value:
- name: REPLICAS
  value: "3"
Or you can treat the replica count as a template variable and substitute it before applying. Example template.yaml:
spec:
  replicas: ${num_replicas}
  template:
    spec:
      containers:
      - env:
        - name: REPLICAS
          value: "${num_replicas}"
Then render it:
export num_replicas=3
cat template.yaml | envsubst > dapi-stateful.yaml

Related

Kubernetes keep variables fixed to pods

I have an application that has 3 pods and each pod needs a fixed variable name stored in each pod. So if everything is running fine, the three pods would have var1, var2, and var3 stored on the corresponding pods.
If the first pod gets replaced which has var1, how can I determine that the other 2 pods have var2 and var3, and thus know that the new pod should be assigned var1?
Can this be done with Stateful Sets?
I see two ways of doing that:
Using StatefulSets:
For a StatefulSet with N replicas, each Pod in the StatefulSet will be
assigned an integer ordinal, from 0 up through N-1, that is unique
over the Set.
Creating the Pods manually. Example:
apiVersion: v1
kind: Pod
metadata:
  name: memory-demo-3
  namespace: mem-example
spec:
  containers:
  - name: memory-demo-3-ctr
    image: polinux/stress
If you need your application to be aware of the Pod it is running on, there is an interesting page in the Kubernetes documentation: "Expose Pod Information to Containers Through Environment Variables".
Example:
apiVersion: v1
kind: Pod
metadata:
  name: mypod-var1
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
Using a StatefulSet you can extract this from the pod-name.
env:
- name: podname
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
and then get it from the end of the name. The pods in a StatefulSet will be named <StatefulSetName>-<ordinal>, see pod-identity.
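For example, the entrypoint could derive the ordinal from that name and map it to a fixed variable (a minimal sketch; the variable names are illustrative):
# Pods in a StatefulSet are named <StatefulSetName>-0, <StatefulSetName>-1, ...
ORDINAL="${podname##*-}"        # keep only what follows the last dash
case "$ORDINAL" in
  0) FIXED_VAR="var1" ;;
  1) FIXED_VAR="var2" ;;
  2) FIXED_VAR="var3" ;;
esac
echo "this replica owns $FIXED_VAR"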

Is it possible to know if the node where a Kubernetes Pod is being scheduled is master or worker?

I'm currently using Kubernetes to schedule a DaemonSet on both master and worker nodes.
The DaemonSet definition is the same for both node types (same image, same volumes, etc), the only difference is that when the entrypoint is executed, I need to write a different configuration file (which is generated in Python with some dynamic values) if the node is a master or a worker.
Currently, to overcome this I'm using two different DaemonSet definitions with an env value which tells if the node is a master or not. Here's the yaml file (only relevant parts):
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: worker-ds
  namespace: kube-system
  labels:
    k8s-app: worker
spec:
  ...
    spec:
      hostNetwork: true
      containers:
      - name: my-image
        ...
        env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: IS_MASTER
          value: "false"
  ...
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: master-ds
  namespace: kube-system
  labels:
    k8s-app: master
spec:
  ...
    spec:
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: my-image
        ...
        env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: IS_MASTER
          value: "true"
  ...
However, since the only difference is the IS_MASTER value, I want to collapse both definitions into a single one that programmatically determines whether the current node where the pod is being scheduled is a master or a worker.
Is there any way to know this information about the node programmatically (even by reading a configuration file on the node [for example, something that only the master has, or vice versa] or something like that)?
Thanks in advance.
Unfortunately, there is no convenient way to access node information from inside a pod.
If you only want a single DaemonSet definition, you can add a sidecar container to your pod; the sidecar can access the Kubernetes API, and your main container can then get what it needs from the sidecar.
By the way, I think your current solution is fine :)
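As a rough illustration of that approach (a minimal sketch; it assumes the pod's service account is allowed to read Node objects, and that NODE_NAME is injected via the Downward API field spec.nodeName):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
if curl -sk -H "Authorization: Bearer $TOKEN" \
     "https://kubernetes.default.svc/api/v1/nodes/${NODE_NAME}" \
   | grep -q 'node-role.kubernetes.io/master'; then
  IS_MASTER=true
else
  IS_MASTER=false
fi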
You can tell a node is the master if it has the label node-role.kubernetes.io/master: "".
What you need to do is access that label from your containers, which can be done with the Downward API (Edit: wrong, only Pod information can be accessed via the Downward API). You can mount the labels inside your containers using:
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "labels"
      fieldRef:
        fieldPath: metadata.labels
You can then search the content of that file from within the container.
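For example (a minimal sketch; the mount path /etc/podinfo is an assumption, and per the edit above the file will contain the Pod's own labels, not the node's):
volumeMounts:
- name: podinfo
  mountPath: /etc/podinfo
Inside the container you could then inspect the file, e.g. cat /etc/podinfo/labels, to read the labels that were exposed.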

Kube / create deployment with config map

I'm new to Kubernetes, and I'm trying to create a deployment with a ConfigMap file. I have the following:
app-mydeploy.yaml
--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-mydeploy
  labels:
    app: app-mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
      - name: mydeploy-1
        image: mydeploy:tag-latest
        envFrom:
        - configMapRef:
            name: map-mydeploy
map-mydeploy
-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
  namespace: default
data:
  my_var: 10.240.12.1
I created the config and the deploy with the following commands:
kubectl create -f app-mydeploy.yaml
kubectl create configmap map-mydeploy --from-file=map-mydeploy
When I do kubectl describe deployments, I get, among other things:
Environment Variables from:
map-mydeploy ConfigMap Optional: false
Also, kubectl describe configmaps map-mydeploy gives me the right results.
The issue is that my container is in CrashLoopBackOff. When I look at the logs, it says: time="2019-02-05T14:47:53Z" level=fatal msg="Required environment variable my_var is not set.
This log is from my container, saying that my_var is not defined in the environment variables.
What am I doing wrong?
I think you are missing your key in the command
kubectl create configmap map-mydeploy --from-file=map-mydeploy
Try this: kubectl create configmap map-mydeploy --from-file=my_var=map-mydeploy
Also, if you are just using one value, I highly recommend creating your ConfigMap from a literal: kubectl create configmap my-config --from-literal=my_var=10.240.12.1, then referencing the ConfigMap in your deployment as you are currently doing.
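Either way, you can verify which keys actually ended up in the ConfigMap (a quick check using the names from the question):
kubectl get configmap map-mydeploy -o yaml
# expected under data:
#   my_var: "10.240.12.1"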

In kubernetes How can I use metadata name from Replication Controller inside my pod?

Let's say I want to pass:
name: sparkworker1-rc
to my pods so I can use it as a parameter for a log file, for example:
- name: "JAVA_OPTS"
  value: "-DMY_RC_NAME=$(MY_RC_NAME)"
But instead of getting "sparkworker1-rc" I get the name of the pod that is running, i.e. sparkworker1-rc-(name_of_the_pod).
This is my YAML:
kind: ReplicationController
apiVersion: v1
metadata:
  name: sparkworker1-rc
spec:
  replicas: 1
  selector:
    component: spark-worker1
  template:
    metadata:
      labels:
        component: spark-worker1
      annotations:
        pod.beta.kubernetes.io/hostname: worker1
Does anyone know how I can get the RC name and NOT the pod name?
I'm not sure we can access that metadata value inside the container. Another option is to pass the metadata as an environment variable, like this:
env:
- name: METADATANAME
  value: sparkworker1-rc
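Alternatively, a minimal sketch (it assumes your pods keep the usual <rc-name>-<random-suffix> naming): expose the Pod name via the Downward API and strip the suffix in the entrypoint.
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
Then, in the container: MY_RC_NAME="${MY_POD_NAME%-*}"  # drops the trailing random suffix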

Kubernetes deployment name from within a pod?

How can I source the Kubernetes deployment/job name that spawned the current pod from within the pod?
In many cases the hostname of the Pod equals the name of the Pod (you can access it via the HOSTNAME environment variable). However, that's not a reliable method of determining the Pod's identity.
You will want to use the Downward API, which allows you to expose metadata as environment variables and/or files on a volume.
The name and namespace of a Pod can be exposed as environment variables (fields: metadata.name and metadata.namespace) but the information about the creator of a Pod (which is the annotation kubernetes.io/created-by) can only be exposed as a file.
Example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox
  labels: {app: busybox}
spec:
  selector: {matchLabels: {app: busybox}}
  template:
    metadata: {labels: {app: busybox}}
    spec:
      containers:
      - name: busybox
        image: busybox
        command:
        - "sh"
        - "-c"
        - |
          echo "I am $MY_POD_NAME in the namespace $MY_POD_NAMESPACE"
          echo
          grep ".*" /etc/podinfo/*
          while :; do sleep 3600; done
        env:
        - name: MY_POD_NAME
          valueFrom: {fieldRef: {fieldPath: metadata.name}}
        - name: MY_POD_NAMESPACE
          valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo/
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: "labels"
            fieldRef: {fieldPath: metadata.labels}
          - path: "annotations"
            fieldRef: {fieldPath: metadata.annotations}
To see the output:
$ kubectl logs `kubectl get pod -l app=busybox -o name | cut -d / -f2`
Output:
I am busybox-1704453464-m1b9h in the namespace default
/etc/podinfo/annotations:kubernetes.io/config.seen="2017-02-16T16:46:57.831347234Z"
/etc/podinfo/annotations:kubernetes.io/config.source="api"
/etc/podinfo/annotations:kubernetes.io/created-by="{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"busybox-1704453464\",\"uid\":\"87b86370-f467-11e6-8d47-525400247352\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"191157\"}}\n"
/etc/podinfo/annotations:kubernetes.io/limit-ranger="LimitRanger plugin set: cpu request for container busybox"
/etc/podinfo/labels:app="busybox"
/etc/podinfo/labels:pod-template-hash="1704453464"
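Building on that output, a minimal sketch for recovering the Deployment name (it assumes the created-by annotation format shown above, which newer Kubernetes versions may no longer set): read the ReplicaSet name out of the annotation and drop its trailing hash.
# strip the JSON escaping, pull out the last "name" value, drop the hash suffix
CREATED_BY=$(grep 'created-by' /etc/podinfo/annotations | tr -d '\\')
RS_NAME=$(echo "$CREATED_BY" | sed 's/.*"name":"\([^"]*\)".*/\1/')   # busybox-1704453464
DEPLOYMENT_NAME="${RS_NAME%-*}"                                      # busybox
echo "$DEPLOYMENT_NAME"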
If you are using the Downward API to get the deployment name from inside the pod and want to avoid the volume-mount approach, there is one opinionated way to expose deployment info to the pod as environment variables.
Template labels specified in a Deployment spec are added as pod labels to each pod of that deployment.
Example: the app label below will be added to all pods of this deployment.
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
...
It is a commonly followed convention (again, not necessarily true in your case) for deployments to keep the app label value the same as the deployment name, as shown in the above example. If your deployments follow this convention (mine did), you can expose this label's value (essentially, the name of the deployment) as an environment variable to the pod, using the Downward API.
Continuing the above example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app']
Again, to clarify: this is not a guaranteed solution to your problem, as it still does not put the actual deployment name in env vars. It is just an opinionated approach that I found useful and thought worth sharing.
In my case, there were a lot of deployments (>20) and I didn't want to add the deployment name manually as an env variable for each deployment config. As my deployments already followed the above convention, I just copied the bit of YAML specifying the NAMESPACE and DEPLOYMENT_NAME variables into each deployment config.
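To check the result (a quick sketch; deploy/nginx matches the example above, and recent kubectl versions let exec pick a pod from a deployment):
kubectl exec deploy/nginx -- printenv DEPLOYMENT_NAME NAMESPACE
# nginx
# default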
References:
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api