How to get kubernetes host IP from inside of a pod? - kubernetes

Let's say we have a frontend pod and a backend pod running in a Kubernetes cluster.
Both pods have corresponding services exposing them on the host (type: NodePort). In the end, the frontend is reached at <Host IP>:<Port 1>, and the backend at <Host IP>:<Port 2>.
How can I find out the host IP so that it can be used in the frontend pod (as the value of an environment variable)? I tried setting localhost, but it didn't work, so presumably the exact IP has to be provided.

Use the downward API (note that env goes under an entry in containers, not directly under spec):
spec:
  containers:
  - image: ...
    env:
    - name: REACT_APP_BACKEND_URL
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
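Note that status.hostIP yields only the bare IP. If the frontend needs a full URL including the backend's NodePort, one way is to compose it with a dependent environment variable (a sketch; the 30002 port is an assumption, substitute your backend service's actual NodePort):

```yaml
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
# $(HOST_IP) is expanded by Kubernetes because HOST_IP is defined earlier in this list
- name: REACT_APP_BACKEND_URL
  value: "http://$(HOST_IP):30002"   # 30002 is a hypothetical NodePort
```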

Related

Host node address (alias or static ip?) from inside containers

What is the correct way to address a host node from inside containers?
I have a container that resides on a host node, and the host node has a web server running on it. The container needs to be able to hit the web server on the host node.
I expected to find an alias for the host node like node..cluster.local (10.1.111.192), but I can't find it in the documentation.
The environment is microk8s with kubedns enabled.
The address assigned to the host on the calico interface, 10.1.111.192, is reachable from inside the node.
I also found in the documentation that I can add a hostalias-pod, so I could add an alias, e.g. node.local (10.1.111.192): https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
Hardcoding the IP doesn't seem graceful, but I'm in a single-node environment, so it's unlikely to matter as long as the node address doesn't change (does it ever change?). This is a small project where I'm trying to learn, though, so I wanted to find the most correct way to do this.
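For reference, the hostAliases approach from that doc would look something like this (a sketch using the calico address above; the alias name is arbitrary):

```yaml
spec:
  hostAliases:
  - ip: "10.1.111.192"      # the node's calico address from above
    hostnames:
    - "node.local"          # arbitrary alias, written into the pod's /etc/hosts
```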
You can use the downward API to get the host IP; it is worth mentioning that it returns the IP of the node the pod is running on.
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
So from inside the pod you will be able to reach that particular host:
curl $HOST_IP:8080
A complete example:
apiVersion: v1
kind: Pod
metadata:
  name: print-host-ip
spec:
  containers:
  - name: print-host-ip
    image: gcr.io/google_containers/busybox
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    command: [ "/bin/sh", "-c", 'echo "host ip is $HOST_IP"' ]

How can I assign my host IP address to a Kubernetes ConfigMap?

I assigned my host IP address in the ConfigMap YAML below, but my host IP address keeps changing.
Can I use the host MAC address instead, or is there another solution?
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-configmap
data:
  display: 10.0.10.123:0.0
You can't put "the host" IP address into a ConfigMap. Consider a cluster with multiple nodes and multiple replicas of your Deployment: you could have three identical Pods running, all mounting the same ConfigMap, but all running on different hosts.
If you do need the host's IP address for some reason, you can use the downward API to get it:
# In your pod spec, not a ConfigMap
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
Again, though, note that each replica could be running on a different node, so this is only useful if you can guarantee that some resource is running on every node (for example, launched by a Kubernetes DaemonSet). That configuration looks like an X Window System display-server address, and a display server would typically sit outside the cluster, not on the nodes actually running the pods.
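If the per-node resource really is guaranteed by a DaemonSet, a minimal sketch might look like this (the names, image, and port are placeholders, not taken from the question):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: per-node-service
spec:
  selector:
    matchLabels:
      app: per-node-service
  template:
    metadata:
      labels:
        app: per-node-service
    spec:
      containers:
      - name: server
        image: example/server:latest   # placeholder image
        ports:
        - containerPort: 8080
          hostPort: 8080               # pods can then reach $HOST_IP:8080 on their own node
```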

Redis master is getting wrong ip, port of slave in kubernetes redis-sentinel

While creating Redis pods in Kubernetes, the redis-slave pods are able to connect to the master using the address mentioned, but in redis-master the address of the slave is wrong: it is always an address like 10.44.0.0, so sentinel is not able to perform the failover. Is there any way to resolve this?
The Redis replica is announcing its internal pod-network IP, which is not reachable by other pods, such as the master pod. You need to configure these two parameters in redis.conf using "reachable" IPs:
replica-announce-ip <ip>
replica-announce-port <port>
(The sentinel containers have the analogous sentinel announce-ip and sentinel announce-port settings in sentinel.conf.)
Since you are inside Kubernetes, you do not know in advance which IP/port your replica will use, so you can take the value from the status.podIP field and pass it in as an environment variable. You can see an example in the k8s docs.
This can be a draft for your problem:
containers:
- name: redis
  image: redis
  env:
  - name: REPLICA_ANNOUNCE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  command:
  - redis-server
  args:
  - --replica-announce-ip
  - $(REPLICA_ANNOUNCE_IP)
There are other options, such as exposing the replicas behind a fixed-IP ClusterIP service, but IMHO this is simpler.
More info here
Redis sentinel documentation
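The same downward-API trick can be applied to the sentinel containers, which also have to announce a reachable address. A sketch (the sentinel.conf path is an assumption, and passing the directive on the command line relies on Redis's standard config-as-CLI-arguments handling):

```yaml
containers:
- name: sentinel
  image: redis
  env:
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  command:
  - redis-sentinel
  - /etc/redis/sentinel.conf      # assumed mount point for the sentinel config
  args:
  - --sentinel
  - announce-ip
  - $(POD_IP)
```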

How to set node ip as nameserver in dnsConfig?

I'm overriding the DNS policy of a pod since I'm facing an issue with the pod's default /etc/resolv.conf; among other things, that default /etc/resolv.conf prevents the pod from connecting to an SMTP server.
Hence the dnsPolicy I want to apply to the deployment/pod is:
dnsConfig:
  nameservers:
  - <ip-of-the-node>
  options:
  - name: ndots
    value: '5'
  searches:
  - monitoring.svc.cluster.local
  - svc.cluster.local
  - cluster.local
dnsPolicy: None
In the above configuration, the nameservers entry needs to be the IP of the node where the pod gets deployed. Since I have three worker nodes, I cannot hard-code the value to a specific worker node's IP, and I would rather not pin the pod to a particular node: if that node lacks sufficient resources, the pod might remain in a Pending state.
How can I make nameservers take the IP address of the node where the pod gets deployed?
Or is there some kind of generic value I can give nameservers so that the pod can still reach the SMTP server?
dnsConfig supports up to three nameserver IP addresses, so theoretically you could hard-code all of them in the nameservers field. However, as a workaround, you can expose the node IP as an environment variable and write it into the pod's resolv.conf. Example:
spec:
  containers:
  - name: envar-demo-container
    command: ["/bin/sh"]
    args: ["-c", "echo nameserver $NODE_IP >> /etc/resolv.conf"]
    image: nginx
    env:
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
fieldPath: status.hostIP picks up the IP address of the node the pod is deployed on and exposes it as an environment variable, which is then written as a nameserver line to /etc/resolv.conf.

How configure the application.server dynamically for Kafka Streams remote interactive queries on a spring boot app running on Kubernetes

We have a Kubernetes cluster with three pods running, and I want to know which RPC endpoints we need to provide in application.server to make interactive queries work.
We have a use case where we need to query a state store via a gRPC server.
While creating the gRPC server we are using 50052 as the port, but I am not able to work out what the value of application.server should be, as it takes host:port.
For the host, do we need to give the endpoint IP of each pod, with 50052 as the port?
For Example below:
$>kubectl get ep
NAME ENDPOINTS AGE
myapp 10.8.2.85:8080,10.8.2.88:8080 10d
Pod1 -> 10.8.2.85:8080
Pod2 -> 10.8.2.88:8080
So will the value of application.server be:
1. 10.8.2.85:50052 (the port is what I am giving the gRPC server)
2. 10.8.2.88:50052 (the port is what I am giving the gRPC server)
If the above application.server values are correct, how do I get the pod IP dynamically?
You can make your pod's IP address available as an environment variable and then reference that environment variable in your application.yml.
See: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/
pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: kafka-stream-app
spec:
  containers:
  - name: kafka-stream-app
    # ... some more configs
    env:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
application.yml
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              application.server: ${MY_POD_IP}:50052