Kubernetes access to port on node from inside pod? - kubernetes

I am trying to access a service listening on a port running on every node in my bare-metal (Ubuntu 20.04) cluster from inside a pod. I can use the real IP address of one of the nodes and it works. However, I need each pod to connect to the port on its own node; I can't use '127.0.0.1' inside a pod.
More info: I am trying to wrangle a bunch of existing services into k8s. We use an old version of Consul for service discovery and have it running on every node providing DNS on 8600. I figured out how to edit the coredns Corefile to add a consul { } block so lookups for .consul work.
consul {
    errors
    cache 30
    forward . 157.90.123.123:8600
}
However I need to replace that IP address with the "address of the node the coredns pod is running on".
Any ideas? Or other ways to solve this problem? Tx.

Comment from #mdaniel worked. Tx.
Edit the coredns deployment and add this to the container spec, after volumeMounts:
env:
- name: K8S_NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
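If you'd rather not edit interactively, a one-shot strategic merge patch should achieve the same thing (a sketch; it assumes the container in the deployment is named coredns, which is the default in kubeadm clusters):
kubectl -n kube-system patch deployment coredns --patch '
spec:
  template:
    spec:
      containers:
      - name: coredns    # matched by name, env is merged into the existing container
        env:
        - name: K8S_NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
'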
Edit the coredns ConfigMap and add this to the bottom of the Corefile:
consul {
    errors
    cache 30
    forward . {$K8S_NODE_IP}:8600
}
Check that DNS is working
kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
nslookup myservice.service.consul
nslookup www.google.com
exit

Related

Kubernetes pod is unable to connect or ping the host ip it's running on. Works fine when pinging other machines on the same network as the host

I am unable to connect to my Jenkins master (running outside the cluster) from a pod running on the same machine as the Jenkins master instance.
When the pods run on another host machine, ping/connection works fine.
I'm using flannel. The only thing I can see is that this host's IP address is configured in the cni.conf file, in the exception list for the OutBoundNAT endpoint policy.
How can I run a Jenkins Agent pod on the same host as the Jenkins master if I cannot connect the IP of the host from the pod it's running on?
Thanks,
You need to assign the pods to that specific node.
There are several ways to do that, but the simplest one to test with is a nodeSelector.
Example:
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    key1: value1
    key2: value2
Your node must have the matching key/value pairs in its labels.
To check nodes labels you can use:
kubectl get nodes --show-labels
To add a label to node:
kubectl label nodes <your-node-name> <label>
Example:
kubectl label nodes worker-2.example.com color=blue
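To double-check that the label is in place and that the pod actually landed on that node, something like this should work (the pod name is whatever you deployed):
kubectl get nodes -l color=blue
kubectl get pod <your-pod-name> -o wide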
For more complete examples you can check this link:
https://gist.github.com/devops-school/3da18faede22b18ac7013c404bc10740
Good luck!

kubernetes pod cannot resolve local hostnames but can resolve external ones like google.com

I am trying Kubernetes and seem to have hit a bit of a hurdle. The problem is that from within my pod I can't curl local hostnames such as wrkr1 or wrkr2 (machine hostnames on my network), but I can successfully resolve hostnames such as google.com or stackoverflow.com.
My cluster is a basic setup with one master and 2 worker nodes.
What works from within the pod:
curl to google.com from pod -- works
curl to another service (kubernetes) from pod -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- does not work
What works from the node hosting the pod:
curl to google.com -- works
curl to another machine on same LAN via its IP address such as 192.168.x.x -- works
curl to another machine on same LAN via its hostname such as wrkr1 -- works
Note: the pod CIDR is completely different from the IP range used on the LAN.
The node has a hosts file entry for wrkr1's IP address (I've checked that the node can resolve the hostname without it too, but I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry).
Kubernetes Version: 1.19.14
Ubuntu Version: 18.04 LTS
Is this normal behavior, and what can be done if I want pods to be able to resolve hostnames on the local LAN as well?
What happens
Is this normal behavior?
This is normal behaviour. There is no DNS server in the network where your virtual machines are hosted, and Kubernetes has its own DNS server inside the cluster; it simply doesn't know what happens on your host, and in particular it doesn't read /etc/hosts, because pods don't have access to that file.
I read somewhere that a pod inherits its node's DNS resolution, so I've kept the entry
This is where it gets tricky. There are four available DNS policies, applied per pod. We will look at the two that are most commonly used:
"Default": The Pod inherits the name resolution configuration from the node that the pods run on. See related discussion for more details.
"ClusterFirst": Any DNS query that does not match the configured cluster domain suffix, such as "www.kubernetes.io", is forwarded to the upstream nameserver inherited from the node. Cluster administrators may have extra stub-domain and upstream DNS servers configured
The trickiest part is this (from the same link above):
Note: "Default" is not the default DNS policy. If dnsPolicy is not
explicitly specified, then "ClusterFirst" is used.
That means that all pods that do not have a DNS policy set will run with ClusterFirst, and they won't see /etc/resolv.conf on the host. I tried changing this to Default and, indeed, the pod could then resolve everything the host can; however, internal cluster resolving stops working, so it's not an option.
For example, the coredns deployment itself runs with the Default dnsPolicy, which allows coredns to resolve hosts.
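For illustration, a minimal pod sketch that opts into the node's resolver via dnsPolicy: Default (trading away cluster DNS, as noted above; the pod name and image are just placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: host-dns-test         # hypothetical name
spec:
  dnsPolicy: Default          # inherit the node's /etc/resolv.conf
  containers:
  - name: test
    image: busybox:1.36       # any small image with nslookup works
    command: ["sleep", "3600"]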
How this can be resolved
1. Add local domain to coreDNS
This requires adding an A record per host. Here's a part of the edited coredns configmap. This line should be within the .:53 { block:
file /etc/coredns/local.record local
This part goes right after the block above ends (the SOA information was taken from the example; it doesn't make any difference here):
local.record: |
  local.  IN  SOA  sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
  wrkr1.  IN  A    172.10.10.10
  wrkr2.  IN  A    172.11.11.11
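Putting the two fragments together, the edited ConfigMap might look roughly like this (a sketch based on a stock kubeadm Corefile; your existing plugins may differ, and only the file line and the local.record key are additions):
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        file /etc/coredns/local.record local
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
  local.record: |
    local.  IN  SOA  sns.dns.icann.org. noc.dns.icann.org. 2015082541 7200 3600 1209600 3600
    wrkr1.  IN  A    172.10.10.10
    wrkr2.  IN  A    172.11.11.11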
Then the coreDNS deployment should be edited to mount this file:
$ kubectl edit deploy coredns -n kube-system
volumes:
- configMap:
    defaultMode: 420
    items:
    - key: Corefile
      path: Corefile
    - key: local.record  # 1st line to add
      path: local.record # 2nd line to add
    name: coredns
And restart coreDNS deployment:
$ kubectl rollout restart deploy coredns -n kube-system
Just in case check if coredns pods are running and ready:
$ kubectl get pods -A | grep coredns
kube-system coredns-6ddbbfd76-mk2wv 1/1 Running 0 4h46m
kube-system coredns-6ddbbfd76-ngrmq 1/1 Running 0 4h46m
If everything's done correctly, newly created pods will now be able to resolve hosts by their names. You can find an example in the coredns documentation.
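A quick way to verify from a throwaway pod (busybox is just a convenient image that ships nslookup); if everything is wired up, resolving wrkr1 from the new pod should succeed:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup wrkr1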
2. Set up a DNS server in the network
While avahi looks similar to a DNS server, it does not act like one. It's not possible to set up request forwarding from coredns to avahi, but it is possible to run a proper DNS server in the network and have everything resolved that way.
3. Deploy avahi to kubernetes cluster
There's a ready image with avahi here. If it's deployed into the cluster with dnsPolicy set to ClusterFirstWithHostNet and, most importantly, hostNetwork: true, it will be able to use the host's network adapter to discover all available hosts within the network.
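A rough sketch of the relevant fields in such a deployment (the image reference is a placeholder for the avahi image mentioned above; everything else is an assumption about your setup):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: avahi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: avahi
  template:
    metadata:
      labels:
        app: avahi
    spec:
      hostNetwork: true                   # use the host's network adapter
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS while on the host network
      containers:
      - name: avahi
        image: <avahi-image>              # placeholder, use the image linked above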
Useful links:
Pods DNS policy
Custom DNS entries for kubernetes

How to find the url of a service in kubernetes?

I have a local kubernetes cluster on Docker Desktop.
This is how my kubernetes service looks when I do a kubectl describe service:
Name: helloworldsvc
Namespace: test
Labels: app=helloworldsvc
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"helloworldsvc"},"name":"helloworldsvc","namespace":"test...
Selector: app=helloworldapp
Type: ClusterIP
IP: 10.108.182.240
Port: http 9111/TCP
TargetPort: 80/TCP
Endpoints: 10.1.0.28:80
Session Affinity: None
Events: <none>
This service is pointing to a deployment with a web app.
My question is: how do I find the URL for this service?
I already tried http://localhost:9111/ and that did not work.
I verified that the pod that this service points to is up and running.
The URL of a service has the following format:
<service-name>.<namespace>.svc.cluster.local:<service-port>
In your case it is:
helloworldsvc.test.svc.cluster.local:9111
Get the service name: kubectl get service -n test
The URL to a kubernetes service is service-name.namespace.svc.cluster.local:service-port, where cluster.local is the cluster domain.
To get the cluster name from your kubeconfig: kubectl config get-contexts | awk '{print $2}'
URL to service in your case will be helloworldsvc.test.svc.cluster.local:9111
The way you are trying won't work: to make the service available on your localhost you need to expose it via a NodePort, or use kubectl port-forward or kubectl proxy.
However, if you don't want a NodePort and just want to check that everything works from inside the cluster, follow these steps to get inside the container (if it has a shell).
kubectl exec -it <pod-name> -n <namespace> -- sh
then do a
curl localhost:80 or curl helloworldsvc.test.svc.cluster.local:9111 or curl 10.1.0.28:80
but these curl commands will only work inside a Kubernetes pod, not on your local machine.
To access it on your host machine: kubectl port-forward svc/helloworldsvc 80:9111 -n test
The service you have created is of type ClusterIP which is only accessible from inside the cluster. You have two ways to access it from your desktop:
Create a NodePort type service and then access it via nodeip:nodeport (see the sketch below)
Use kubectl port-forward and then access it via localhost:forwardedport
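For the NodePort option, a minimal manifest sketch (the selector and ports come from the describe output above; the service name and nodePort value are arbitrary examples, the latter in the default 30000-32767 range):
apiVersion: v1
kind: Service
metadata:
  name: helloworldsvc-nodeport   # hypothetical name
  namespace: test
spec:
  type: NodePort
  selector:
    app: helloworldapp
  ports:
  - port: 9111
    targetPort: 80
    nodePort: 30911              # example value
The app should then answer on http://<node-ip>:30911.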
The following URL variations worked for me when calling from the same cluster and the same namespace (namespace: default; though all but the first should still work when the services are in different namespaces):
http://helloworldsvc
http://helloworldsvc.default
http://helloworldsvc.default.svc
http://helloworldsvc.default.svc.cluster.local
http://helloworldsvc.default.svc.cluster.local:80
// e.g. calling one of these URLs from C# with HttpClient:
using HttpClient client = new();
string result = await client.GetStringAsync(url);
Notes:
I happen to be calling to and from an ASP.NET 6 application using HttpClient
I think that client just defaults to port 80, so the port does not need to be set explicitly. But I did verify that, for all of these, the port can be added to or removed from the URL.
http only (not https, unless you configured it specially)
namespace can only be omitted in the first case (i.e. when domain / 'authority' is just the service name alone). So helloworldsvc.svc.cluster.local:80 fails with exception "Name or service not known (helloworldsvc.svc.cluster.local:80)"
If you are working with minikube, you can run the commands below.
minikube service --all
For a specific service:
minikube service service-name --url
Here is another way to get the URL of a service.
Enter a pod through kubectl exec:
kubectl exec -it podName -n namespace -- /bin/sh
Then, in the pod, run nslookup against the service IP, such as 172.20.2.213:
/ # nslookup 172.20.2.213
nslookup: can't resolve '(null)': Name does not resolve
Name: 172.20.2.213
Address 1: 172.20.2.213 172-20-2-213.servicename.namespace.svc.cluster.local
Or run nslookup against the service name in the pod:
/ # nslookup servicename
nslookup: can't resolve '(null)': Name does not resolve
Name: 172.20.2.213
Address 1: 172.20.2.213 172-20-2-213.servicename.namespace.svc.cluster.local
The service URL is then servicename.namespace.svc.cluster.local plus the service port, taken from the nslookup output after stripping the IP prefix.

How can a Kubernetes pod connect to database which is running in the same local network (outside the cluster) as the host?

I have a Kubernetes cluster (K8s) running in a physical server A (internal network IP 192.168.200.10) and a PostgreSQL database running in another physical server B (internal network IP 192.168.200.20). How can my Java app container (pod) running in the K8s be able to connect to the PostgreSQL DB in server B?
OS: Ubuntu v16.04
Docker 18.09.7
Kubernetes v1.15.4
Calico v3.8.2
Pod base image: openjdk:8-jre-alpine
I have tried following this example to create a service and endpoint
kind: Service
apiVersion: v1
metadata:
  name: external-postgres
spec:
  ports:
  - port: 5432
    targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: external-postgres
subsets:
- addresses:
  - ip: 192.168.200.20
  ports:
  - port: 5432
My JDBC connection string is jdbc:postgresql://external-postgres/MY_APPDB, but it doesn't work. The pod cannot ping server B or telnet to the DB using that internal IP, nor ping the external-postgres service name. I do not wish to use "hostNetwork: true" or connect to server B via a public IP.
Any advice is much appreciated. Thanks.
I just found out the issue is due to the K8s pod network conflicting with the servers' local network (192.168.200.x) subnet.
During the K8s cluster initialization I used:
kubeadm init --pod-network-cidr=192.168.0.0/16
The 192.168.0.0/16 CIDR range must be changed to something else, e.g. 10.123.0.0/16.
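So, after a kubeadm reset, the re-initialization looks like this (same flag, non-conflicting range):
kubeadm init --pod-network-cidr=10.123.0.0/16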
This IP range must also be changed in the calico.yaml file before applying the Calico plugin:
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.123.0.0/16"
I can now ping and telnet to server B after resetting and re-initializing the K8s cluster with the new CIDR.
I guess you can replace CALICO_IPV4POOL_CIDR without re-spawning the K8s cluster via the kubeadm builder tool; it may be useful in some circumstances.
Remove current Calico CNI plugin installation, eg.:
$ kubectl delete -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
Install Calico CNI addon, supplying CALICO_IPV4POOL_CIDR parameter with a desired value:
$ curl -k https://docs.projectcalico.org/v3.8/manifests/calico.yaml --output some_file.yaml && sed -i "s~$old_ip~$new_ip~" some_file.yaml && kubectl apply -f some_file.yaml
Re-spin CoreDNS pods:
$ kubectl delete pod --selector=k8s-app=kube-dns -n kube-system
Wait until the CoreDNS pods obtain IP addresses from the new network CIDR pool.
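A quick sanity check that the pods really picked up addresses from the new range (assuming the 10.123.0.0/16 example above):
kubectl get pods -n kube-system -o wide | grep coredns
The IP column should now show 10.123.x.x addresses instead of 192.168.x.x.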

How to know a Pod's own IP address from inside a container in the Pod?

Kubernetes assigns an IP address to each container, but how can I acquire that IP address from a container in the Pod? I couldn't find the way in the documentation.
Edit: I'm going to run an Aerospike cluster in Kubernetes, and the config files need its own IP address. I'm attempting to use confd to set the hostname. I would use an environment variable if it were set.
The simplest answer is to have your pod or replication controller yaml/json expose the pod IP as an environment variable by adding the config block defined below (the block additionally makes the pod name and namespace available to the pod).
env:
- name: MY_POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: MY_POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
Recreate the pod/rc and then try
echo $MY_POD_IP
also run env to see what else kubernetes provides you with.
Some clarifications (not really an answer)
In kubernetes, every pod gets assigned an IP address, and every container in the pod gets assigned that same IP address. Thus, as Alex Robinson stated in his answer, you can just use hostname -i inside your container to get the pod IP address.
I tested with a pod running two dumb containers, and indeed hostname -i was outputting the same IP address inside both containers. Furthermore, that IP was equivalent to the one obtained using kubectl describe pod from outside, which validates the whole thing IMO.
However, PiersyP's answer seems cleaner to me.
Sources
From kubernetes docs:
The applications in a pod all use the same network namespace (same IP and port space), and can thus “find” each other and communicate using localhost. Because of this, applications in a pod must coordinate their usage of ports. Each pod has an IP address in a flat shared networking space that has full communication with other physical computers and pods across the network.
Another piece from kubernetes docs:
Until now this document has talked about containers. In reality, Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost.
kubectl describe pods <name of pod> will give you some information including the IP
kubectl get pods -o wide
Gives you a list of pods with name, status, IP, node, etc.
POD_HOST=$(kubectl get pod $POD_NAME --template={{.status.podIP}})
This command will return you an IP
The container's IP address should be properly configured inside of its network namespace, so any of the standard linux tools can get it. For example, try ifconfig, ip addr show, hostname -I, etc. from an attached shell within one of your containers to test it out.
You could use
kubectl describe pod `hostname` | grep IP | sed -E 's/IP:[[:space:]]+//'
which is based on what #mibbit suggested.
This takes the following facts into account:
hostname is set to POD's name but this might change in the future
kubectl was manually placed in the container (possibly when the image was built)
Kubernetes provides a service account credential to the container implicitly as described in Accessing the Cluster / Accessing the API from a Pod, i.e. /var/run/secrets/kubernetes.io/serviceaccount in the container
Even simpler to remember than the sed approach is to use awk.
Here is an example, which you can run on your local machine:
kubectl describe pod <podName> | grep IP | awk '{print $2}'
The IP itself is in column 2 of the output, hence the $2.
In some cases, instead of relying on the downward API, programmatically reading the local IP address (from the network interfaces) inside the container also works.
For example, in golang:
https://stackoverflow.com/a/31551220/6247478
Containers have the same IP as the pod they are in, so from inside the container you can just run ip a; the IP you get is the pod's IP as well.