Clean way to connect to services running on the same host as the Kubernetes cluster

I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.

That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
    - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
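To verify the mapping from inside the cluster, a quick one-off check could look like this (the pod name and busybox image are just placeholders, and it assumes the Service was created in the default namespace); the lookup should return a CNAME pointing at whatever you set externalName to:
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup my-service.default.svc.cluster.local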
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service
through the external-service.default.svc.cluster.local domain name (or
even external-service) instead of using the service’s actual FQDN.
This hides the actual service name and its location from pods
consuming the service, allowing you to modify the service definition
and point it to a different service any time later, by only changing
the externalName attribute or by changing the type back to ClusterIP
and creating an Endpoints object for the service—either manually or by
specifying a label selector on the service and having it created
automatically.
ExternalName services are implemented solely at the DNS level—a simple
CNAME DNS record is created for the service. Therefore, clients
connecting to the service will connect to the external service
directly, bypassing the service proxy completely. For this reason,
these types of services don’t even get a cluster IP.
This relies on your machine having a resolvable hostname. On minikube there's a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know whether k3s supports something similar.

Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This seems to be the case only for k3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>
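For example, a pod could then reference the host service by the node hostname instead of a hard-coded IP (the variable name, hostname and port below are just placeholders for illustration):
env:
  - name: HOST_SERVICE_URL
    value: http://my-hostname:8080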

Related

DNS server in kubernetes for translate LAN hosts

I am using a bare-metal cluster of 1 master and 2 nodes on premises in my home lab, with Istio, MetalLB and Calico.
I want to create a DNS server in kubernetes that translates IPs for the hosts on the LAN.
Is it possible to use the coreDNS already installed in k8s?
Yes, it's possible but there are some points to consider when doing that. Most of them are described in the Stackoverflow answer below:
Stackoverflow.com: Questions: How to expose Kubernetes DNS externally
For example: The DNS server would be resolving the queries that are internal to the Kubernetes cluster (like nslookup kubernetes.default.svc.cluster.local).
I've included an example of how you can expose your CoreDNS to external sources and add a Service that points to some IP address.
Steps:
Modify the CoreDNS Service to be available outside.
Modify the configMap of your CoreDNS accordingly to:
CoreDNS.io: Plugins: K8s_external
Create a Service that is pointing to external device.
Test
Modify the CoreDNS Service to be available outside.
You are probably already aware of how Services work and which types can be made available outside the cluster. You will need to change your CoreDNS Service from ClusterIP to either NodePort or LoadBalancer (I'd reckon LoadBalancer would be a better idea considering that MetalLB is used and you will access the DNS server on port 53).
$ kubectl edit --namespace=kube-system service/coredns (or kube-dns)
A side note!
CoreDNS uses TCP and UDP simultaneously, which could be an issue when creating a LoadBalancer. Here you can find more information on it:
Metallb.universe.tf: Usage (at the bottom)
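As a sketch, the type change could also be applied with a patch like the one below (assuming the Service is named coredns; in some distributions it is kube-dns). If your MetalLB version does not support mixed-protocol LoadBalancers, you may need the shared-IP approach described in the link above instead:
kubectl -n kube-system patch service coredns -p '{"spec":{"type":"LoadBalancer"}}'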
Modify the configMap of your CoreDNS
If you would like to resolve a domain like, for example, example.org, you will need to edit the configMap of CoreDNS in the following way:
$ kubectl edit configmap --namespace=kube-system coredns
Add the line to the Corefile:
k8s_external example.org
This plugin allows an additional zone to resolve the external IP address(es) of a Kubernetes service. This plugin is only useful if the kubernetes plugin is also loaded.
The plugin uses an external zone to resolve in-cluster IP addresses. It only handles queries for A, AAAA and SRV records; all others result in NODATA responses. To make it a proper DNS zone, it handles SOA and NS queries for the apex of the zone.
-- CoreDNS.io: Plugins: K8s_external
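For orientation, the line sits alongside the other plugins in the Corefile; a trimmed sketch (your default plugins may differ) could look like:
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    k8s_external example.org
    forward . /etc/resolv.conf
    cache 30
}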
Create a Service that is pointing to external device.
Following the link that I've included, you can now create a Service that will point to an IP address:
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  clusterIP: None
  externalIPs:
    - 192.168.200.123
  type: ClusterIP
Test
I've used minikube with --driver=docker (with NodePort), but I'd reckon you can use the ExternalIP of your LoadBalancer to check it:
dig @192.168.49.2 test.default.example.org -p 32261 +short
192.168.200.123
where:
@192.168.49.2 - IP address of minikube
test.default.example.org - service-name.namespace.k8s_external_domain
-p 32261 - NodePort port
+short - to limit the output
Additional resources:
Linux.die.net: Man: Dig

Kubernetes service with external name not discoverable

I'm deploying a Node.js application into a Kubernetes cluster. This application needs access to an external database which is publicly available under db.external-service.com. For this purpose a service of the type ExternalName is created.
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: db.external-service.com
In the deployment, an environment variable that provides the database hostname to the application is set to the name of this service.
env:
  - name: DB_HOST
    value: postgres
The problem is that when the Node.js application tries to connect to the database, it ends up with this error message:
Error: getaddrinfo ENOTFOUND postgres
I already tried using the full hostname postgres.<my-namespace>.svc.cluster.local, without success.
What could be wrong with this setup?
EDIT:
It works if I use the plain IP address behind db.external-service.com directly in my pod configuration.
It does not work if I use the hostname directly in my pod configuration.
I can ping the hostname from one of my pods: kubectl exec my-pod-xxx -- ping db.external-service.com resolves to the right IP address.
It turns out that the Kubernetes worker nodes were not on the database's allow list, so the connection timed out.
It seems like your Pod is not able to resolve the DNS name db.external-service.com to an IP address.
In Kubernetes, Pods use the CoreDNS Pods to resolve Service names to Service IP addresses.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
If the CoreDNS Pods are not able to resolve the name to an IP address themselves, they are supposed to forward the request to the nameserver configured in the host/VM/node resolv.conf, because the dnsPolicy for CoreDNS Pods is Default. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
So what is the dnsPolicy of your Pod?
Are you able to resolve db.external-service.com to an IP address from the host/VM/node on which the CoreDNS Pod is running?
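A couple of quick checks along those lines could look like this (my-pod-xxx is the placeholder pod name from the question, and the nslookup calls assume the container image ships nslookup):
kubectl get pod my-pod-xxx -o jsonpath='{.spec.dnsPolicy}'
kubectl exec my-pod-xxx -- nslookup db.external-service.com
kubectl exec my-pod-xxx -- nslookup postgres.<my-namespace>.svc.cluster.local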

access postgres in kubernetes from an application outside the cluster

I am trying to access a Postgres DB deployed in Kubernetes (kubeadm) on CentOS VMs from another application running on another CentOS VM. I have deployed the Postgres service as 'NodePort' type. My understanding is that we can deploy it as LoadBalancer type only on cloud providers like AWS/Azure and not on a bare-metal VM. So now I am trying to configure 'ingress' with the NodePort type service. But I am still unable to access my DB other than by using kubectl exec $Pod-Name on the Kubernetes master.
My ingress.yaml is
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: postgres-ingress
spec:
  backend:
    serviceName: postgres
    servicePort: 5432
which does not show any address, as below:
NAME HOSTS ADDRESS PORTS AGE
postgres-ingress * 80 4m19s
I am not even able to access it from pgAdmin on my local Mac. Am I missing something?
Any help is highly appreciated.
Ingress won't work: it's only designed for HTTP traffic, and the Postgres protocol is not HTTP. You want solutions that deal with just raw TCP traffic:
A NodePort service alone should be enough, and it's probably the simplest solution. Find out the port by doing kubectl describe on the service, and then connect your Postgres client to the IP of the node VM (not the pod or service) on that port; a sketch of such a Service follows after this list.
You can use port-forwarding: kubectl port-forward pod/your-postgres-pod 5432:5432, and then connect your Postgres client to localhost:5432. This is my preferred way for accessing the database from your local machine (it's very handy and secure) but I wouldn't use it for production workloads (kubectl must be always running so it's somewhat fragile and you don't get the best performance).
If you do special networking configuration, it is possible to directly access the service or pod IPs from outside the cluster. You have to route traffic for the pod and service CIDR ranges to the k8s nodes, this will probably involve configuring your VM hypervisors, routers and firewalls, and is highly dependent on what networking (CNI) plugin are you using for your Kubernetes cluster.
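For the NodePort option, a minimal Service manifest could look like the sketch below (the name, selector label and nodePort value are assumptions for illustration); you would then point your Postgres client at <node-IP>:30432:
apiVersion: v1
kind: Service
metadata:
  name: postgres-nodeport
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
      nodePort: 30432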

Accessing GCP Internal Load Balancer from another region

I need to access an internal application running on GKE Nginx Ingress service riding on Internal Load Balancer, from another GCP region.
I am fully aware that it is not possible using direct Google networking and it is a huge limitation (GCP Feature Request).
Internal Load Balancer can be accessed perfectly well via VPN tunnel from AWS, but I am not sure that creating such a tunnel between GCP regions under the same network is a good idea.
Workarounds are welcomed!
In the release notes from GCP, it is stated that:
Global access is an optional parameter for internal LoadBalancer Services that allows clients from any region in your VPC to access the internal TCP/UDP Load Balancer IP address.
Global access is enabled per-Service using the following annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true".
UPDATE: Below service works for GKE v1.16.x & newer versions:
apiVersion: v1
kind: Service
metadata:
  name: ilb-global
  annotations:
    # Required to assign internal IP address
    cloud.google.com/load-balancer-type: "Internal"
    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
For GKE v1.15.x and older versions:
Accessing the internal load balancer IP from a VM sitting in a different region will not work, but the following helped me make the internal load balancer global.
As we know, an internal load balancer is nothing but a forwarding rule, so we can use the gcloud command to enable global access.
First, get the internal IP address of the Load Balancer using kubectl and save it, as below:
# COMMAND:
kubectl get services/ilb-global
# OUTPUT:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ilb-global LoadBalancer 10.0.12.12 10.123.4.5 80:32400/TCP 18m
Note the value of "EXTERNAL-IP" or simply run the below command to make it even simpler:
# COMMAND:
kubectl get service/ilb-global \
-o jsonpath='{.status.loadBalancer.ingress[].ip}'
# OUTPUT:
10.123.4.5
GCP gives a randomly generated ID to the forwarding rule created for this Load Balancer. If you have multiple forwarding rules, use the following command to figure out which one is the internal load balancer you just created:
# COMMAND:
gcloud compute forwarding-rules list | grep 10.123.4.5
# OUTPUT
NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
a26cmodifiedb3f8252484ed9d0192 asia-south1 10.123.4.5 TCP asia-south1/backendServices/a26cmodified44904b3f8252484ed9d019
NOTE: If you are not working on Linux or grep is not installed, simply run gcloud compute forwarding-rules list and manually look for the forwarding rule with the IP address we are looking for.
Note the name of the forwarding rule and run the following command to update it with --allow-global-access (remember to add beta, as it is still a beta feature):
# COMMAND:
gcloud beta compute forwarding-rules update a26cmodified904b3f8252484ed9d0192 \
--region asia-south1 --allow-global-access
# OUTPUT:
Updated [https://www.googleapis.com/compute/beta/projects/PROJECT/regions/REGION/forwardingRules/a26hehemodifiedhehe490252484ed9d0192].
And it's done. Now you can access this internal IP (10.123.4.5) from any instance in any region (as long as it is in the same VPC network).
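To confirm that the flag took effect, something like the following should print True (reusing the example rule name from above; allowGlobalAccess is the field name exposed by the beta API):
gcloud beta compute forwarding-rules describe a26cmodified904b3f8252484ed9d0192 \
  --region asia-south1 --format='value(allowGlobalAccess)'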
Another possible way is to run an nginx reverse proxy on a Compute Engine instance in the same region as the GKE cluster, and use the internal IP of that Compute Engine instance to communicate with the services of the GKE cluster.
First of all, note that the only way to connect to a GCP resource (in this case your GKE cluster) from an on-premises location is either through a Cloud Interconnect or a VPN setup, and both ends must be in the same region and VPC to be able to communicate with each other.
Having said that, I see you would rather not do that under the same VPC, so a workaround for your scenario could be:
Creating a Service of type LoadBalancer, so your cluster can be reachable through an external (public) IP by exposing this service. If you are worried about security, you can use Istio to enforce access policies, for example.
Or creating an HTTP(S) load balancer with Ingress, so your cluster can be reachable through its external (public) IP. Again, for security purposes you can use GCP Cloud Armor, which so far works only for HTTP(S) Load Balancing.

Kubernetes IP service IP and ports

I've deployed a hello-world application on my Kubernetes cluster. When I access the app via <cluster ip>:<port> in my browser I get the following webpage: hello-kuleuven app webpage.
I understand that from outside the cluster you have to access the app via the cluster IP and the port specified in the deployment file (which in my case is 30001). From inside the cluster you have to contact the master node with its local IP and another port number, in my case 10.111.152.164:8080.
My question is about the last line of the webpage:
Kubernetes listening in 443 available at tcp://10.96.0.1:443
Since, the service is already accessible from inside and outside the cluster by other ports and IP's, I'm not sure what this does.
The IP 10.96.0.1 is the cluster IP of the default kubernetes Service, which fronts the Kubernetes API server on port 443. You can see it using
kubectl get svc kubernetes -n default
Every pod gets this address injected through the KUBERNETES_SERVICE_HOST/KUBERNETES_SERVICE_PORT environment variables, which is what the last line of your webpage is reporting: in-cluster clients reach the API server via tcp://10.96.0.1:443.
Kubernetes DNS is separate: it schedules a DNS Pod and Service on the cluster (with its own cluster IP, typically something like 10.96.0.10 on port 53), and configures the kubelets to tell individual containers to use the DNS Service's IP to resolve DNS names.
Read more about Kubernetes DNS in the official documentation here
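To see the DNS Service's own cluster IP for comparison (assuming a standard setup where the Service is still named kube-dns even when CoreDNS is running):
kubectl get svc kube-dns -n kube-system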