I am setting up VerneMQ (an MQTT broker) in a cluster configuration, so I am launching 4 replicas in a StatefulSet. Apparently VerneMQ wants to communicate with the other brokers in the cluster via DNS names like this:
echo "Will join an existing Kubernetes cluster with discovery node at
${kube_pod_name}.${VERNEMQ_KUBERNETES_SUBDOMAIN}.${DOCKER_VERNEMQ_KUBERNETES_NAMESPACE}.svc.cluster.local"
Unfortunately the logs indicate that this doesn't work:
14:05:56.741 [info] Application vmq_server started on node
'VerneMQ@broker-vernemq-0.broker-vernemq.messaging.svc.cluster.local'
broker-vernemq-0 is the pod's name and broker-vernemq is the name of the statefulset. The service is configured as LoadBalancer.
The problem:
I opened a terminal in the pod broker-vernemq-1 and executed ping broker-vernemq-0, and was surprised that it could not resolve this hostname:
ping: unknown host broker-vernemq-0
I was under the impression that this is supposed to work?
The service must be headless for kube-dns to serve domain names like that. See https://stackoverflow.com/a/46638059
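A minimal sketch of such a headless service, assuming the names from the question (the app: broker-vernemq selector is a hypothetical label; match it to your pod template's labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: broker-vernemq          # must match serviceName in the StatefulSet
  namespace: messaging
spec:
  clusterIP: None               # "None" is what makes the service headless
  selector:
    app: broker-vernemq         # hypothetical label; adjust to your pod labels
  ports:
    - name: epmd                # Erlang port mapper, assumed here for clustering
      port: 4369
```

With this in place, kube-dns creates per-pod records such as broker-vernemq-0.broker-vernemq.messaging.svc.cluster.local; the LoadBalancer service can still exist alongside it for external clients.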
Related
I have a single node Kubernetes cluster, installed using k3s on bare metal. I also run some services on the host itself, outside the Kubernetes cluster. Currently I use the external IP address of the machine (192.168.200.4) to connect to these services from inside the Kubernetes network.
Is there a cleaner way of doing this? What I want to avoid is having to reconfigure my Kubernetes pods if I decide to change the IP address of my host.
Possible magic I wish existed: a Kubernetes service or IP that automagically points to my external IP (192.168.200.4), or a DNS name that points to the node's external IP address.
That's what ExternalName services are for (https://kubernetes.io/docs/concepts/services-networking/service/#externalname):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: ${my-hostname}
  ports:
    - port: 80
Then you can access the service from within Kubernetes as my-service.${namespace}.svc.cluster.local.
See: https://livebook.manning.com/concept/kubernetes/external-service
After the service is created, pods can connect to the external service
through the external-service.default.svc.cluster.local domain name (or
even external-service) instead of using the service’s actual FQDN.
This hides the actual service name and its location from pods
consuming the service, allowing you to modify the service definition
and point it to a different service any time later, by only changing
the externalName attribute or by changing the type back to ClusterIP
and creating an Endpoints object for the service—either manually or by
specifying a label selector on the service and having it created
automatically.
ExternalName services are implemented solely at the DNS level—a simple
CNAME DNS record is created for the service. Therefore, clients
connecting to the service will connect to the external service
directly, bypassing the service proxy completely. For this reason,
these types of services don’t even get a cluster IP.
This relies on having a resolvable hostname for your machine. On minikube there is a DNS alias host.minikube.internal that is set up to resolve to an IP address that routes to your host machine; I don't know whether k3s supports something similar.
Thanks @GeertPt,
With minikube's host.minikube.internal in mind I searched around and found that CoreDNS has a DNS entry for each host it's running on. This seems to be the case only for k3s.
Checking
kubectl -n kube-system get configmap coredns -o yaml
reveals there is the following entry:
NodeHosts: |
  192.168.200.4 my-hostname
So if the hostname doesn't change, I can use this instead of the IP.
Also, if you're running plain docker you can use host.docker.internal to access the host.
So to sum up:
from minikube: host.minikube.internal
from docker: host.docker.internal
from k3s: <hostname>
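Putting the two answers together, a sketch of a stable in-cluster alias for the host (host-service is a hypothetical name; this assumes the CNAME target is resolved by CoreDNS, which serves the NodeHosts entries):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: host-service
spec:
  type: ExternalName
  externalName: my-hostname   # the k3s NodeHosts entry, currently 192.168.200.4
```

Pods can then use host-service.<namespace>.svc.cluster.local, and nothing needs to change in the pods if the host's IP changes.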
I am using Spring Cloud Gateway Hoxton.SR5. My service is registered to Consul with the pod IP instead of a service or cluster ID. That works, but I would rather register my service to Consul with the service or cluster ID instead of the pod IP address. Do you think this change should be done from application.yml, or does Spring Cloud Gateway have a configuration for that?
While running our tests in the Kubernetes dev environment, we saw that services register with Consul using their pod IP addresses instead of the service IP address. This causes problems under high traffic, since Kubernetes uses a rolling strategy to replace pods. E.g.: Pod A will be removed and Pod B will be started. Pod A remains in the cluster (maybe just 3-4 seconds) until Pod B is live, but during those 3-4 seconds Consul may still forward a request to Pod A, which is about to be removed. If instead Consul used the service IP, which does not change on a pod restart, the problem would be fixed.
Thanks
consul:
  host: localhost
  port: 8450
  discovery:
    preferIpAddress: true
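If you want to register a stable name instead of the pod IP, one option — a sketch, assuming the service's in-cluster DNS name is my-service.my-namespace.svc.cluster.local — is to disable prefer-ip-address and set an explicit hostname via Spring Cloud Consul's discovery properties:

```yaml
spring:
  cloud:
    consul:
      host: localhost
      port: 8450
      discovery:
        prefer-ip-address: false                             # register a hostname, not the pod IP
        hostname: my-service.my-namespace.svc.cluster.local  # hypothetical; use your service's DNS name
```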
I am deploying Redis with a Sentinel architecture on Kubernetes.
When I work with deployments on my cluster that require Redis, everything works fine.
The problem is that some services of my deployment are located on a different Kubernetes cluster.
When the clients reach the Redis Sentinel (which I exposed via a NodePort that maps internally to 26379), they get a reply with the master's IP.
What actually happens is that they get the Redis master's Kubernetes-internal IP and the internal port 6379.
As I said, while working inside Kubernetes this is fine, since the clients can access that IP, but when the services are external it is not reachable.
I found that there are configuration options named:
cluster-announce-ip and cluster-announce-port
I have set those values to the external IP of the cluster and the external port, hoping that it would solve the problem, but still no change.
I am using the official Docker image: redis:4.0.11-alpine
any help would be appreciated
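Note that the cluster-announce-* directives only apply to Redis Cluster mode; for a Sentinel setup the equivalent knobs are the announce options below — a sketch, where 203.0.113.10 and the 300xx ports stand in for your real external IP and NodePorts:

```
# redis.conf on the data pods (Redis 4.x naming)
slave-announce-ip 203.0.113.10
slave-announce-port 30079

# sentinel.conf on the sentinel pods
sentinel announce-ip 203.0.113.10
sentinel announce-port 30026
```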
I've deployed a hello-world application on my Kubernetes cluster. When I access the app via <cluster ip>:<port> in my browser, I get the hello-kuleuven app webpage.
I understand that from outside the cluster you have to access the app via the cluster IP and the port specified in the deployment file (which in my case is 30001). From inside the cluster you have to contact the master node with its local IP and another port number, in my case 10.111.152.164:8080.
My question is about the last line of the webpage:
Kubernetes listening in 443 available at tcp://10.96.0.1:443
Since the service is already accessible from inside and outside the cluster via other IPs and ports, I'm not sure what this does.
The IP 10.96.0.1 is the ClusterIP of the built-in kubernetes service, which fronts the API server. You can see it using
kubectl get svc kubernetes
Kubernetes injects this address into every pod through environment variables (KUBERNETES_SERVICE_HOST=10.96.0.1, KUBERNETES_PORT=tcp://10.96.0.1:443) so that containers can reach the API server from inside the cluster.
So the last line of the webpage is simply echoing those variables; it is not another way to reach your app.
Read more about accessing the API from within a pod in the Kubernetes official documentation.
I have a Kubernetes cluster (node01-03).
There is a service with a NodePort to access a pod (NodePort 31000).
The pod is running on node03.
I can access the service with http://node03:31000 from any host. On every node I can access the service as http://[name_of_the_node]:31000. But I cannot access the service via http://node01:31000, even though there is a listener (kube-proxy) on node01 at port 31000. The iptables rules look okay to me. Is this how it's intended to work? If not, how can I troubleshoot further?
NodePort is exposed on every node in the cluster. https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport clearly says:
each Node will proxy that port (the same port number on every Node) into your Service
So, from both inside and outside the cluster, the service can be accessed using NodeIP:NodePort on any node in the cluster and kube-proxy will route using iptables to the right node that has the backend pod.
However, if the service is accessed using NodeIP:NodePort from outside the cluster, we need to first make sure that NodeIP is reachable from where we are hitting NodeIP:NodePort.
If NodeIP:NodePort cannot be accessed on a node that is not running the backend pod, it may be caused by the default DROP rule on the FORWARD chain (which in turn is caused by Docker 1.13 for security reasons). Here is more info about it. Also see step 8 here. A solution for this is to add the following rule on the node:
iptables -A FORWARD -j ACCEPT
The k8s issue for this is here and the fix is here (the fix should be there in k8s 1.9).
Three other options to enable external access to a service are:
ExternalIPs: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
LoadBalancer with an external, cloud-provider's load-balancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
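For reference, a minimal NodePort service sketch matching the setup described above (my-app and the port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app          # hypothetical pod label
  ports:
    - port: 80           # service port inside the cluster
      targetPort: 8080   # container port on the pod
      nodePort: 31000    # exposed on every node by kube-proxy
```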
If you are accessing pods from within the Kubernetes cluster, you don't need to use the NodePort. Use the Kubernetes service name and port instead. Say podA needs to access podB through a service called serviceB. Assuming HTTP, all you need is http://serviceB:port/ (where port is the service's own port, which forwards to the pod's targetPort).