I want to use the DNS system of Google Kubernetes Engine to make one pod (a web backend) connect to another pod and service (in this case Redis).
When I check the DNS in the cluster, I get this:
[ root@curl:/ ]$ nslookup redis-service
Server: 10.40.0.10
Address 1: 10.40.0.10 kube-dns.kube-system.svc.cluster.local
Name: redis-service
Address 1: 10.40.2.59 redis-service.default.svc.cluster.local
[ root@curl:/ ]$
In my application, I would set the REDIS_HOST URL to redis-service.default.svc.cluster.local.
Unfortunately, the logs say it cannot connect (also with http:// in front).
Am I missing a setting that would let these pods communicate using this address? This address is predictable, which is why I want to use it.
I found 2 things that work:
Change the service to ClusterIP instead of NodePort.
@harshmanvar mentioned in the comments that it also works when just using the service name.
I am not exactly sure why; I would like to understand this behaviour.
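For reference, a minimal ClusterIP Service for Redis might look like the sketch below (the selector label app: redis and port 6379 are assumptions, not taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP          # default type; reachable only from inside the cluster
  selector:
    app: redis             # assumed pod label
  ports:
    - port: 6379           # standard Redis port (assumed)
      targetPort: 6379
With this in place, pods in the same namespace can reach Redis as redis-service, or as redis-service.default.svc.cluster.local from other namespaces.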
Related
I'm running colima with kubernetes like:
colima start --kubernetes
I created a few running pods, and I want to access them through the browser.
But I don't know what the colima IP (or Kubernetes node IP) is.
help appreciated
You can get the node IP like this:
kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
nodeName   Ready    <none>   15h   v1.26.0
Then with the nodeName:
kubectl describe node nodeName
That gives you a description of the node; look for this section:
Addresses:
  InternalIP:  10.165.39.165
  Hostname:    master
Ping it to verify the network.
Find your hosts file on the Mac (/etc/hosts) and add an entry like:
10.165.39.165 test.local
This lets you access the cluster with a domain name.
Ping test.local to verify.
You cannot access a ClusterIP service from outside the cluster.
To access your pod you have several possibilities.
If your service is of type ClusterIP, you can create a temporary connection from your host with a port-forward:
kubectl port-forward svc/yourservicename localport:podport
(I would recommend this) Create a service of type NodePort.
Then
kubectl get svc -o wide
This shows you the NodePort (in the range 30000-32767).
You can now access the pod via test.local:NodePort or IPaddress:NodePort.
Note: If you deployed in a namespace other than default, add -n yournamespace in the kubectl commands.
Update:
If you want to start colima with an IP address, first find an available one on your local network.
You can get your network settings with:
ifconfig
Find the network; it should be the same as that of your Internet router.
Look for the subnet mask. Most likely it is 255.255.255.0.
The value to pass is then:
--network-address xxx.xxx.xxx.xxx/24
If the subnet mask is 255.255.0.0, use /16 instead. That is unlikely if you are connecting from home, but inside a company network it is possible.
Again, check with ping and follow the steps from the beginning to verify the Kubernetes node configuration.
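Putting the pieces together, the start command would look roughly like this (the address is only an example; check colima start --help on your version for the exact form of the flag):
colima start --kubernetes --network-address 192.168.1.50/24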
I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
    - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
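As an illustration, a pod could then reference it through an environment variable (the variable name is just an example):
env:
  - name: BACKEND_HOST
    value: my-service   # resolves via the ExternalName CNAME to ${my-hostname}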
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service
through the external-service.default.svc.cluster.local domain name (or
even external-service) instead of using the service’s actual FQDN.
This hides the actual service name and its location from pods
consuming the service, allowing you to modify the service definition
and point it to a different service any time later, by only changing
the externalName attribute or by changing the type back to ClusterIP
and creating an Endpoints object for the service—either manually or by
specifying a label selector on the service and having it created
automatically.
ExternalName services are implemented solely at the DNS level—a simple
CNAME DNS record is created for the service. Therefore, clients
connecting to the service will connect to the external service
directly, bypassing the service proxy completely. For this reason,
these types of services don’t even get a cluster IP.
This relies on your machine having a resolvable hostname. On minikube there's a DNS alias, host.minikube.internal, that is set up to resolve to an IP address that routes to your host machine; I don't know if k3s supports something similar.
Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for the host it's running on. This seems to be the case only for k3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>
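To confirm the hostname from the NodeHosts entry above resolves from inside the cluster, a quick throwaway pod can be used (pod name and image are just examples):
kubectl run -it --rm dnstest --image=busybox:1.36 --restart=Never -- nslookup my-hostname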
I am using a bare-metal cluster of 1 master and 2 nodes on premises in my home lab, with Istio, MetalLB and Calico.
I want to create a DNS server in Kubernetes that resolves IPs for the hosts on the LAN.
Is it possible to use the coreDNS already installed in k8s?
Yes, it's possible but there are some points to consider when doing that. Most of them are described in the Stackoverflow answer below:
Stackoverflow.com: Questions: How to expose Kubernetes DNS externally
For example: The DNS server would be resolving the queries that are internal to the Kubernetes cluster (like nslookup kubernetes.default.svc.cluster.local).
I've included an example of how you can expose your CoreDNS to external sources and add a Service that points to an external IP address.
Steps:
Modify the CoreDNS Service to be available outside.
Modify the configMap of your CoreDNS accordingly to:
CoreDNS.io: Plugins: K8s_external
Create a Service that is pointing to external device.
Test
Modify the CoreDNS Service to be available outside.
If you already know how Services work, you are probably aware of which types can be made available outside the cluster. You will need to change your CoreDNS Service from ClusterIP to either NodePort or LoadBalancer (I'd reckon LoadBalancer is the better option, considering metallb is used and you will access the DNS server on port 53).
$ kubectl edit --namespace=kube-system service/coredns (or kube-dns)
A side note!
CoreDNS uses TCP and UDP simultaneously, which can be an issue when creating a LoadBalancer. Here you can find more information on it:
Metallb.universe.tf: Usage (at the bottom)
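For example, instead of editing interactively you could switch the type with a one-line patch (a sketch; the Service may be named kube-dns on your cluster):
kubectl patch svc coredns -n kube-system -p '{"spec":{"type":"LoadBalancer"}}'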
Modify the configMap of your CoreDNS
If you would like to resolve a domain like, for example, example.org, you will need to edit the configMap of CoreDNS as follows:
$ kubectl edit configmap --namespace=kube-system coredns
Add the line to the Corefile:
k8s_external example.org
This plugin allows an additional zone to resolve the external IP address(es) of a Kubernetes service. This plugin is only useful if the kubernetes plugin is also loaded.
The plugin uses an external zone to resolve in-cluster IP addresses. It only handles queries for A, AAAA and SRV records; all others result in NODATA responses. To make it a proper DNS zone, it handles SOA and NS queries for the apex of the zone.
-- CoreDNS.io: Plugins: K8s_external
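For context, a typical Corefile with the plugin added might look like the sketch below (the surrounding plugins are just a common default; only the k8s_external line is the addition):
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    k8s_external example.org
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}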
Create a Service that is pointing to external device.
Following on the link that I've included, you can now create a Service that will point to an IP address:
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: default
spec:
  clusterIP: None
  externalIPs:
    - 192.168.200.123
  type: ClusterIP
Test
I've used minikube with --driver=docker (with NodePort), but I'd reckon you can use the ExternalIP of your LoadBalancer to check it:
dig @192.168.49.2 test.default.example.org -p 32261 +short
192.168.200.123
where:
@192.168.49.2 - IP address of minikube
test.default.example.org - service-name.namespace.k8s_external_domain
-p 32261 - NodePort port
+short - to limit the output
Additional resources:
Linux.die.net: Man: Dig
I'm deploying a Node.js application into a Kubernetes cluster. This application needs access to an external database which is publicly available at db.external-service.com. For this purpose a Service of type ExternalName is created.
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: db.external-service.com
In the deployment, an environment variable which provides the database hostname to the application is set to the name of this service.
env:
  - name: DB_HOST
    value: postgres
The problem is that when the Node.js application tries to connect to the database, it fails with this error message:
Error: getaddrinfo ENOTFOUND postgres
I already tried to use the full hostname postgres.<my-namespace>.svc.cluster.local, without success.
What could be wrong with this setup?
EDIT:
It works if I use the plain IP address behind db.external-service.com directly in my pod configuration.
It does not work if I use the hostname directly in my pod configuration.
I can ping the hostname from one of my pods: kubectl exec my-pod-xxx -- ping db.external-service.com resolves to the right IP address.
It turned out that the Kubernetes worker nodes were not on the database's allow list, so the connection timed out.
It seems your Pod is not able to resolve the DNS name db.external-service.com to an IP address.
In Kubernetes, Pods use CoreDNS Pods to resolve Service Names to Service IP Addresses.
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
If the CoreDNS Pods are not able to resolve the DNS name to an IP address, they are supposed to forward the request to the nameserver configured in the Host/VM/Node resolv.conf, because the dnsPolicy for the CoreDNS Pods is Default. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy
So what is the dnsPolicy of your Pod ?
Are you able to resolve db.external-service.com to an IP address from the Host/VM/Node on which the CoreDNS Pod is running?
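You can check both quickly (the pod name is a placeholder):
# dnsPolicy of your application pod
kubectl get pod my-pod-xxx -o jsonpath='{.spec.dnsPolicy}'
# resolution from the node that runs the CoreDNS Pod
nslookup db.external-service.com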
I run a CoreOS Kubernetes cluster on Mac OS X, which means it's running inside VirtualBox + Vagrant.
I have in my service.yaml file:
spec:
  type: NodePort
When I type:
kubectl get services
I see:
NAME          CLUSTER_IP      EXTERNAL_IP   PORT(S)    SELECTOR
kubernetes    10.100.0.1      <none>        443/TCP    <none>
my-frontend   10.100.250.90   nodes         8000/TCP   name=my-app
What is the "nodes" external IP? How do I access my-frontend externally?
In addition to "NodePort" types of services there are some additional ways to be able to interact with kubernetes services from outside of cluster:
Use service type "LoadBalancer". It works only for some cloud providers and will not work for virtualbox, but I think it will be good to know about that feature. Link to the documentation
Use one of the latest features called "ingress". Here is description from manual "An Ingress is a collection of rules that allow inbound connections to reach the cluster services. It can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.". Link to the documentation
If Kubernetes is not a strict requirement and you can switch to the latest OpenShift Origin (which is "Kubernetes on steroids"), you can use its "router" feature.
Information about openshift origin.
Information about openshift origin routes
I assume you are using MiniKube for Kubernetes. In such case, to identify your node ip address, use the following command:
.\minikube.exe ip
If the exposed service is of type=NodePort, check the exposed port with the following command:
.\kubectl.exe describe service <service-name>
Check for Node port in the result. Also, if you want to have all these details via nice UI, then you can launch the Kubernetes Dashboard present at the following address:
<Node-ip>:30000
The easiest way to get the host ports is kubectl describe services my-frontend.
The node port will be displayed.
Also you can check the api:
api/v1/namespaces/{namespace_name}/services/{service_name}
or list all:
api/v1/namespaces/default/services
Last, you can choose a fixed nodePort in the service.yml.
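A sketch of such a service.yml, reusing the names from the question (the nodePort value 30080 is an arbitrary example within the default range):
apiVersion: v1
kind: Service
metadata:
  name: my-frontend
spec:
  type: NodePort
  selector:
    name: my-app
  ports:
    - port: 8000
      targetPort: 8000
      nodePort: 30080   # must fall in the node port range (30000-32767 by default)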
Here is the doc on node addresses: http://kubernetes.io/docs/admin/node/#addresses
You can specify the nodePort number when you define the service. If you don't specify a port manually, the system will allocate one for you. You can run kubectl get services -o yaml and find the port at spec.ports[*].nodePort, as suggested in the doc here: https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-nodeport
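If you prefer a one-liner, something like this should print just the allocated port (service name assumed from the question):
kubectl get svc my-frontend -o jsonpath='{.spec.ports[0].nodePort}'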
And you can access your front-end at {nodes' external addresses}:{nodePort}
Hope this helps.