StatefulSet - Get starting pod name during volumeMount

I have a StatefulSet that starts a MySQL cluster. The only downside to it at the moment is that for every replica I need to create a PersistentVolume and a PersistentVolumeClaim with a selector that matches a label and the pod index.
This means I cannot dynamically add replicas without manual interaction.
For this reason I'm searching for a solution that gives me the option to have only 1 Volume and 1 Claim, where each pod knows its own name during creation and uses it as the subPath of the mount (an initContainer would be used to check and create the directories on the volume before the application container starts).
So I'm searching for the correct way to write something like:
volumeMounts:
- name: mysql-datadir
  mountPath: /var/lib/mysql
  subPath: "${PODNAME}/datadir"

You can get the pod name from the metadata (the downward API) by setting an env var:
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
But you cannot use env vars in volume declarations (as far as I know...), so everything else has to be reached via workarounds. One such workaround is described here
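As a rough illustration of that kind of workaround (a minimal sketch, assuming a single shared volume named mysql-datadir; the initContainer name and image are placeholders), the downward-API env var is consumed by an initContainer that prepares the per-pod directory, rather than by the volume declaration itself:
initContainers:
- name: init-datadir        # placeholder name
  image: busybox
  env:
  - name: MY_POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  # the shell expands the env var, which a subPath field cannot do
  command: ["sh", "-c", "mkdir -p /work-dir/${MY_POD_NAME}/datadir"]
  volumeMounts:
  - name: mysql-datadir     # the single shared volume
    mountPath: /work-dir
The main container still needs some way of its own to point at that per-pod directory (for example by resolving the path in its entrypoint), which is why this remains a workaround rather than a direct subPath substitution.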

Related

How to use a node ip inside a configmap in k8s

I want to inject the value of the k8s 'node IP' into a ConfigMap when a pod gets created.
Is there any way to do that?
A ConfigMap is not bound to a host (multiple pods on different hosts can share the same ConfigMap). But you can get these details in a running pod.
You can get the host IP in an environment variable the following way. Add this to your pod's spec section:
env:
- name: MY_NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Details about passing other values to env vars can be found in the official documentation.
Unfortunately you can't get the hostIP in a volume, as the downwardAPI volume doesn't have access to status.hostIP (see the documentation).
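For comparison, a minimal sketch of a downwardAPI volume (the volume and file names are arbitrary): it can expose fields such as metadata.labels as files, but not status.hostIP:
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "labels"              # file created inside the mounted volume
      fieldRef:
        fieldPath: metadata.labels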

Mounting kubernetes volume with User permission

I am using Kubernetes YAML to mount a volume.
I know I can set the mount folder to be for a specific group using this configuration:
securityContext:
  fsGroup: 999
but nowhere can I find a way to also set user ownership, not just the group.
When I access the container folder to check ownership, it is root.
Any way to do so via Kubernetes YAML?
I would expect fsUser: 999 for example, but there is no such thing. :/
There is no way to set the UID in the Pod definition, but you can use an initContainer with the same volumeMount as the main container to set the required permissions.
It is handy in cases like yours where user ownership needs to be set to a non-root value.
Here is a sample configuration (change it as per your need):
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
  initContainers:
  - name: volume-mount-hack
    image: busybox
    command: ["sh", "-c", "chown -R 999:999 /data/demo"]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
No, there is no such option. To check every option available in securityContext, you can use
kubectl explain deployment.spec.template.spec.securityContext
As per the docs:
fsGroup <integer>
A special supplemental group that applies to all containers in a pod. Some
volume types allow the Kubelet to change the ownership of that volume to be
owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit
is set (new files created in the volume will be owned by FSGroup) 3. The
permission bits are OR'd with rw-rw---- If unset, the Kubelet will not
modify the ownership and permissions of any volume.
It's usually a good idea to handle access to files via group ownership, because in restricted Kubernetes configurations you can't actually control the user ID or group ID, for example on Red Hat OpenShift.
What you can do is use runAsUser, if your Kubernetes provider allows it:
runAsUser <integer>
The UID to run the entrypoint of the container process. Defaults to user
specified in image metadata if unspecified. May also be set in
SecurityContext. If set in both SecurityContext and PodSecurityContext, the
value specified in SecurityContext takes precedence for that container.
Your application will then run with the UID you want, and will naturally create and access files as that user. As noted earlier, this is usually not the best approach, because it makes distributing your application across different platforms harder.
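A minimal sketch of what that could look like (the pod name, image, and numeric IDs are illustrative, not prescribed values):
apiVersion: v1
kind: Pod
metadata:
  name: run-as-user-demo
spec:
  securityContext:
    runAsUser: 999   # UID the container entrypoints run as
    fsGroup: 999     # group ownership applied to supported volumes
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "id && sleep 1h"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
Files created by the process under /data should then be owned by UID 999, without the initContainer chown step.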

Identify which GKE Node is serving a client request

I have deployed an application on Google Kubernetes Engine. I would like to identify which client request is being serviced by which node/pod in GKE. Is there a way to map a client request to the pod/node it was serviced by?
The answer to your question greatly depends on the amount of monitoring and instrumentation you have at your disposal.
The most common way to go about it is to add a Prometheus client to the code running on your pods, and use it to write metrics containing labels that can identify the client requests you are interested in.
Once Prometheus scrapes your metrics, they will be enriched with the node/pod emitting them, and you can get the data you are after.
I think the Downward API is what you need. It allows you to expose Pod and node info to the running container. Your application can simply echo the content of certain env variables containing the information you need. This way you can see which Pod, scheduled on which node, is handling a particular request.
A few words about what it is from kubernetes documentation:
There are two ways to expose Pod and Container fields to a running
Container:
Environment variables
Volume Files
Together, these two ways of exposing Pod and Container fields are
called the Downward API.
I would recommend you to take a closer look specifically at Exposing Pod Information to Containers Through Environment Variables. The following example Pod exposes to the container its name as well as node name:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c"]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_NODE_NAME MY_POD_NAME;
        sleep 10;
      done;
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
  restartPolicy: Never
It's just an example that I hope meets your particular requirements, but keep in mind that you can expose much more relevant information this way. Take a quick look at the list of Capabilities of the Downward API.
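For instance, a couple of additional fields that can be exposed the same way (a sketch; the variable names are arbitrary):
env:
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP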

Kubernetes. Is it possible to mount volume to path containing pod id?

I want to define a PVC and create a volume in order to expose some internal files from the container outside (I am using Helm chart definitions). I want to know whether there is any way to use the pod ID in the mountPath that I am defining in deployment.yaml.
In the end I want to get the following folder structure on my node:
/dockerdata-nfs/<podID>/path
volumeMounts:
- name: volumeName
  mountPath: /abc/path
volumes:
- name: volumeName
  hostPath:
    path: /dockerdata-nfs/podID/
You can create a mountPath based on the pod UID using subPathExpr. YAML below:
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: container1
    env:
    - name: UID
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.uid
    image: busybox
    volumeMounts:
    - name: workdir1
      mountPath: /logs
      subPathExpr: $(UID)
  volumes:
  - name: workdir1
    hostPath:
      path: /var/log/pods
This feature was introduced in Kubernetes version 1.14+.
A Pod gets a new UID on recreation, so why would you try to hard-code this value?
Pods are considered to be relatively ephemeral (rather than durable) entities. As discussed in pod lifecycle, Pods are created, assigned a unique ID (UID), and scheduled to nodes where they remain until termination (according to restart policy) or deletion. If a Node dies, the Pods scheduled to that node are scheduled for deletion, after a timeout period. A given Pod (as defined by a UID) is not “rescheduled” to a new node; instead, it can be replaced by an identical Pod, with even the same name if desired, but with a new UID (see replication controller for more details).

Is there a way to get ordinal index of a pod with in kubernetes statefulset configuration file?

We are on Kubernetes 1.9.0 and wonder if there is a way to access the "ordinal index" of a pod within its StatefulSet configuration file. We would like to dynamically assign a value (derived from the ordinal index) to the pod's label and later use it for setting pod affinity (or anti-affinity) under spec.
Alternatively, is the pod's instance name available within the StatefulSet config file? If so, we can hopefully extract the ordinal index from it and dynamically assign it to a label (for later use for affinity).
You can essentially get the unique name of your pod in the StatefulSet as an environment variable; you have to extract the ordinal index from it yourself, though.
In the container's spec:
env:
- name: cluster.name
  value: k8s-logs
- name: node.name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
Right now the only option is to extract the index from the hostname:
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "export INDEX=${HOSTNAME##*-}"]