How to use a node IP inside a ConfigMap in k8s

I want to inject the value of the k8s node IP into a ConfigMap when a pod gets created.
Is there any way to do that?

A ConfigMap is not bound to a host (multiple pods on different hosts can share the same ConfigMap), but you can get the details in a running pod.
You can expose the host IP as an environment variable. Add the following to your pod's spec section:
env:
- name: MY_NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Details about passing other values to env vars can be found in the official documentation.
Unfortunately you can't get the host IP in a volume, as the downward API for volumes doesn't have access to status.hostIP (documentation).
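For comparison, here is a minimal sketch (pod and volume names are made up) of what a downwardAPI volume does support: pod metadata such as labels and annotations, but not status fields like status.hostIP.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo   # hypothetical name
  labels:
    app: demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/labels; sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      # Supported here: metadata.name, metadata.namespace, metadata.uid,
      # metadata.labels, metadata.annotations -- but not status.hostIP.
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
```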

Related

Host node address (alias or static ip?) from inside containers

What is the correct way to address a host node from inside containers?
I have a container that resides on a host node, and the host node has a web server running on it. The container needs to be able to hit web server on the host node.
I expected to find an alias for the host node like node..cluster.local (10.1.111.192), but I can't find it in the documentation.
The environment is microk8s with kubedns enabled.
The address assigned to the host on the calico interface is accessible from inside the node: 10.1.111.192
and I found in the documentation that I can add a hostAliases entry to the pod, so I could add an alias, e.g. node.local (10.1.111.192). https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
Hardcoding the IP doesn't seem graceful, but I'm in a single-node environment, so it's not likely to matter if the node address doesn't change (does this ever change?). This is a small project where I'm trying to learn, though, so I wanted to find the most correct way to do this.
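The hostAliases approach from that documentation page would look roughly like this, using this question's example name node.local and address 10.1.111.192 (the pod name is made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo   # hypothetical name
spec:
  # Adds an entry to the container's /etc/hosts file.
  hostAliases:
  - ip: "10.1.111.192"
    hostnames:
    - "node.local"
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "ping -c 1 node.local"]
```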
You can use the downward API to get the underlying node's address; worth mentioning that it will return the IP of the node the pod is running on.
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
so from inside the pod you will be able to reach that particular host:
curl $HOST_IP:8080
A complete example:
apiVersion: v1
kind: Pod
metadata:
  name: print-host-ip
spec:
  containers:
  - name: print-host-ip
    image: gcr.io/google_containers/busybox
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    command: [ "/bin/sh", "-c", 'echo "host ip is $HOST_IP"' ]

How can I assign my host IP address into a Kubernetes ConfigMap?

I assigned my host IP address in the ConfigMap YAML below, but my host IP address keeps changing.
Can I assign my host MAC address instead, or is there some other possible solution?
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-configmap
data:
  display: 10.0.10.123:0.0
You can't put "the host" IP address into a ConfigMap. Consider a cluster with multiple nodes and multiple replicas of your Deployment: you could have three identical Pods running, all mounting the same ConfigMap, but all running on different hosts.
If you do need the host's IP address for some reason, you can use the downward API to get it:
# In your pod spec, not a ConfigMap
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Again, though, note that each replica could be running on a different node, so this is only useful if you can guarantee some resource is running on every node (maybe a Kubernetes DaemonSet is launching it). That configuration suggests an X Window System display server address, and typically this would be located outside the cluster, not on the nodes actually running the pods.
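As a sketch of the "some resource on every node" case, a DaemonSet runs exactly one replica per node, so status.hostIP in each pod always refers to the node that pod shares with the per-node resource (all names below are made up for illustration):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: per-node-agent   # hypothetical name
spec:
  selector:
    matchLabels:
      app: per-node-agent
  template:
    metadata:
      labels:
        app: per-node-agent
    spec:
      containers:
      - name: agent
        image: busybox
        # Each replica sees its own node's IP via the downward API.
        command: ["/bin/sh", "-c", "echo \"running on $HOST_IP\"; sleep 3600"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
```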

How to set node ip as nameserver in dnsConfig?

I'm overriding the DNS policy of a pod since I'm facing an issue with the pod's default /etc/resolv.conf. Another issue is that the pod is not able to connect to an SMTP server, also due to the pod's default /etc/resolv.conf.
Hence the dnspolicy that is desired to be applied to the deployment/pod is:
dnsConfig:
  nameservers:
  - <ip-of-the-node>
  options:
  - name: ndots
    value: '5'
  searches:
  - monitoring.svc.cluster.local
  - svc.cluster.local
  - cluster.local
dnsPolicy: None
In the above configuration the nameservers entry needs to be the IP of the node where the pod gets deployed. Since I have three worker nodes, I cannot hard-code the value to a specific worker node's IP. I would also prefer not to pin the pod to a particular node, since if that node lacks resources the pod would remain in a Pending state.
How can I make the nameservers entry take the IP address of the node the pod gets deployed on?
Or is it possible to update the nameservers with some kind of generic argument so that the pod can reach the SMTP server?
dnsConfig supports up to three IP addresses in the nameservers field, so theoretically you could hard-code all three worker nodes' IPs there. As a workaround, though, you can pass the node IP address to the pod as an environment variable and append it to /etc/resolv.conf at startup. Example:
spec:
  containers:
  - name: envar-demo-container
    image: nginx
    # Append the node IP as a proper "nameserver" line (a bare IP is not a
    # valid resolv.conf entry), then keep nginx running in the foreground.
    command: ["/bin/sh"]
    args: ["-c", "echo \"nameserver $NODE_IP\" >> /etc/resolv.conf && exec nginx -g 'daemon off;'"]
    env:
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
fieldPath: status.hostIP resolves to the IP address of the node the pod is deployed on and exposes it as an environment variable, which the startup command then appends to /etc/resolv.conf.

StatefulSet - Get starting pod during volumemount

I have a StatefulSet that starts a MySQL cluster. The only downside at the moment is that for every replica I need to create a PersistentVolume and a PersistentVolumeClaim with a selector that matches a label and the pod index.
This means I cannot dynamically add replicas without manual intervention.
For this reason I'm searching for a solution that gives me the option to have only one Volume and one Claim, where during pod creation the pod knows its own name for the subPath of the mount (an initContainer would be used to check and create the directories on the volume before the application container starts).
So I'm looking for the correct way to write something like:
volumeMounts:
- name: mysql-datadir
  mountPath: /var/lib/mysql
  subPath: "${PODNAME}/datadir"
You can get the pod name from the metadata (the downward API) by setting an env var:
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
But you cannot use env vars in volume declarations (as far as I know), so everything else has to be reached via workarounds. One of the workarounds is described here.
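Worth noting: since Kubernetes 1.17 the subPathExpr field supports expanding environment variables in the subpath, which covers exactly this use case. A sketch, reusing this question's volume name:

```yaml
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
volumeMounts:
- name: mysql-datadir
  mountPath: /var/lib/mysql
  # $(VAR) is expanded by the kubelet from the container's env vars,
  # so each pod mounts its own <pod-name>/datadir subdirectory.
  subPathExpr: $(POD_NAME)/datadir
```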

Is there a way to get ordinal index of a pod with in kubernetes statefulset configuration file?

We are on Kubernetes 1.9.0 and wonder if there is a way to access a pod's "ordinal index" within its StatefulSet configuration file. We'd like to dynamically assign a value (derived from the ordinal index) to the pod's label and later use it for setting pod affinity (or anti-affinity) under spec.
Alternatively, is the pod's instance name available within the StatefulSet config file? If so, we can hopefully extract the ordinal index from it and dynamically assign it to a label (for later use for affinity).
You can get the unique name of your pod in a StatefulSet as an environment variable; you have to extract the ordinal index from it yourself, though.
In the container's spec:
env:
- name: cluster.name
  value: k8s-logs
- name: node.name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
Right now the only option is to extract the index from the hostname, e.g. in a postStart hook (note that an export in a postStart hook runs in its own shell and will not persist into the main container's environment):
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "export INDEX=${HOSTNAME##*-}"]
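Because that export does not survive into the main process, a more reliable sketch is to derive the index in the container's own command. The suffix-stripping itself is plain POSIX shell (web-2 below is a made-up example hostname; inside a StatefulSet pod $HOSTNAME is set automatically to <statefulset-name>-<ordinal>):

```shell
#!/bin/sh
# ${HOSTNAME##*-} strips everything up to and including the last hyphen,
# leaving just the ordinal index of the StatefulSet pod.
HOSTNAME="web-2"      # example value; set automatically inside a pod
INDEX=${HOSTNAME##*-}
echo "$INDEX"         # → 2
```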