Host node address (alias or static IP?) from inside containers - Kubernetes

What is the correct way to address a host node from inside containers?
I have a container that resides on a host node, and the host node has a web server running on it. The container needs to be able to hit the web server on the host node.
I expected to find an alias for the host node like node..cluster.local (10.1.111.192), but I can't find it in the documentation.
The environment is microk8s with kubedns enabled.
The address assigned to the host on the calico interface is accessible from inside the node: 10.1.111.192
I also found in the documentation that I can add hostAliases to a pod, so I could add the alias myself, e.g. node.local (10.1.111.192): https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
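A minimal sketch of what that hostAliases entry would look like (the node.local name, the busybox image and the sleep command are just illustrative, and the IP is hardcoded):
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
    - ip: "10.1.111.192"       # the node's calico address from above
      hostnames:
        - "node.local"
  containers:
    - name: app
      image: busybox           # placeholder image
      command: ["sleep", "3600"]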
Hardcoding the IP doesn't seem graceful, but I'm in a single-node environment, so it's not likely to matter if the node address doesn't change (does this ever change?). This is a small project where I'm trying to learn though, so I wanted to find the most correct way to do this.

You can use the downward API to get the address of the underlying host; worth mentioning that status.hostIP returns the IP of the node the pod is running on:
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
so from inside the pod you will be able to reach that particular host:
curl $HOST_IP:8080
A complete example:
apiVersion: v1
kind: Pod
metadata:
  name: print-host-ip
spec:
  containers:
    - name: print-host-ip
      image: gcr.io/google_containers/busybox
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command: [ "/bin/sh", "-c", 'echo "host ip is $HOST_IP"' ]

Related

Redis master is getting wrong ip, port of slave in kubernetes redis-sentinel

While creating Redis pods in Kubernetes, the redis-slave pods are able to connect to the master using the address mentioned, but in the redis-master the address of the slave is wrong; it is always the address 10.44.0.0, due to which Sentinel is not able to do the failover. Is there any way to resolve it?
The Redis replica is announcing its internal pod-network IP, which is not reachable by other pods, such as the master pods. You need to configure these two parameters using "reachable" IPs:
sentinel announce-ip <ip>
sentinel announce-port <port>
Since you are inside Kubernetes you do not know in advance which IP/port your replica will use, so you can use the status.podIP field and pass it in as an environment variable. You can see an example of this in the k8s docs.
This can serve as a draft for your problem:
containers:
  - name: redis
    image: redis
    env:
      - name: REPLICA_ANNOUNCE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
    command:
      - redis-server
    args:
      # passed as two separate arguments; $(REPLICA_ANNOUNCE_IP) is expanded by Kubernetes
      - --replica-announce-ip
      - $(REPLICA_ANNOUNCE_IP)
There are other options, such as exposing the replicas behind a ClusterIP Service with a fixed IP (a rough sketch follows below), but IMHO this is simpler.
More info in the Redis Sentinel documentation.
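For completeness, a rough sketch of the ClusterIP alternative mentioned above, assuming the replica pods carry an app: redis-replica label and use the default Redis port (the names and the fixed IP are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: redis-replica
spec:
  type: ClusterIP
  clusterIP: 10.96.0.50        # optional fixed IP; must fall inside the cluster's service CIDR
  selector:
    app: redis-replica         # assumes the replica pods carry this label
  ports:
    - port: 6379
      targetPort: 6379
The stable Service address could then be used as the announce address instead of the ephemeral pod IP.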

How to get kubernetes host IP from inside of a pod?

Let's say we have a frontend and a backend pods running in a kubernetes cluster.
Both pods have corresponding services exposing them on the host (type: NodePort). In the end, the frontend uses <Host IP>:<Port 1>, and the backend runs on <Host IP>:<Port 2>.
How do I find out the host IP so that it can be used in the frontend pod (to be defined as the value of a variable)? I tried setting localhost, but it didn't work, so probably the exact IP has to be defined.
Use the downward API:
spec:
  containers:
    - name: frontend        # container name is illustrative
      image: ...
      env:
        - name: REACT_APP_BACKEND_URL
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
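Since the frontend actually needs <Host IP>:<Port 2> rather than the bare IP, one way to assemble the full URL (a sketch; the 30002 NodePort value is an assumption) is Kubernetes dependent environment variables, where $(VAR) refers to a variable defined earlier in the list:
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  - name: REACT_APP_BACKEND_URL
    value: "http://$(HOST_IP):30002"   # 30002 is a hypothetical NodePort of the backend Service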

How to use a node ip inside a configmap in k8s

I want to inject the value of the k8s 'node IP' into a ConfigMap when a pod gets created.
Is there any way to do that?
A ConfigMap is not bound to a host (multiple pods on different hosts can share the same ConfigMap), but you can get these details inside a running pod.
You can get the host IP in an environment variable the following way. Add the following to your pod's spec section:
env:
  - name: MY_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Details about passing other values to env vars can be found in the official documentation.
Unfortunately you can't get the hostIP in a volume, as the downward API volume doesn't have access to status.hostIP (see the documentation).
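To illustrate the limitation, a downwardAPI volume can project fields such as metadata.labels into files (sketch below; the volume name is illustrative), but status.hostIP is not on the list of fields it supports:
volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "labels"
          fieldRef:
            fieldPath: metadata.labels   # supported in a volume; status.hostIP is env-only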

How to set node ip as nameserver in dnsConfig?

I'm overriding the DNS policy of a pod since I'm facing an issue with the pod's default /etc/resolv.conf. Another issue is that the pod is not able to connect to an SMTP server, also due to the pod's default /etc/resolv.conf.
Hence the DNS config that I want to apply to the deployment/pod is:
dnsConfig:
  nameservers:
    - <ip-of-the-node>
  options:
    - name: ndots
      value: '5'
  searches:
    - monitoring.svc.cluster.local
    - svc.cluster.local
    - cluster.local
dnsPolicy: None
In the above configuration the nameservers entry needs to be the IP of the node where the pod gets deployed. Since I have three worker nodes, I cannot hard-code the value to a specific worker node's IP. I would also prefer not to pin the pod to a particular node, since if that node does not have enough resources the pod might remain in a pending state.
How can I make the nameservers entry take the IP address of the node where the pod gets deployed?
Or is it possible to update the nameservers with some kind of generic argument so that the pod will be able to connect to the SMTP server?
dnsConfig supports up to 3 IP addresses, so theoretically you could hard-code them in the nameservers field. However, as a workaround, you can pass the node IP address to the pod as an environment variable and write it into the pod's /etc/resolv.conf. Example:
spec:
  containers:
    - name: envar-demo-container
      image: nginx
      command: ["/bin/sh"]
      args: ["-c", 'echo "nameserver $NODE_IP" >> /etc/resolv.conf']
      env:
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
fieldPath: status.hostIP picks up the IP address of the node that the pod is deployed on and exposes it as an environment variable; the shell command then appends it as a nameserver line to /etc/resolv.conf.
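Note that as written the container runs that single shell command and then exits. If the intent is to patch /etc/resolv.conf and then start the real workload (assuming the stock nginx image here), the two steps can be chained, e.g.:
command: ["/bin/sh", "-c"]
args:
  - echo "nameserver $NODE_IP" >> /etc/resolv.conf && exec nginx -g 'daemon off;'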

Identify which GKE Node is serving a client request

I have deployed an application on Google Kubernetes Engine. I would like to identify which client request is being serviced by which node/pod in GKE. Is there a way to map a client request to the pod/node that served it?
The answer to your question greatly depends on the amount of monitoring and instrumentation you have at your disposal.
The most common way to go about it is to add a prometheus client to the code running on your pods, and use it to write metrics containing labels that can identify the client requests you are interested in.
Once Prometheus scrapes your metrics, they will be enriched with the node/pod emitting them, and you can get the data you are after.
I think the Downward API is what you need. It allows you to expose Pod and node info to the running container. Your application can simply echo the content of certain env variables containing the information you need. This way you can see which Pod, scheduled on which node, is handling a particular request.
A few words about what it is, from the Kubernetes documentation:
There are two ways to expose Pod and Container fields to a running Container:
Environment variables
Volume files
Together, these two ways of exposing Pod and Container fields are called the Downward API.
I would recommend taking a closer look specifically at Exposing Pod Information to Containers Through Environment Variables. The following example Pod exposes its own name as well as the node name to the container:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c" ]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_NODE_NAME MY_POD_NAME;
            sleep 10;
          done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
It's just an example that I hope meets your particular requirements, but keep in mind that you can expose much more relevant information this way. Take a quick look at the list of capabilities of the Downward API.
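For example, the same pattern extends to other fields the downward API supports for environment variables (the variable names here are arbitrary):
env:
  - name: MY_POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: MY_NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP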