How to configure Rancher so that internal DNS resolves to custom address - kubernetes

I am new to Rancher 2.0 and I have a set of pods running in a two-node cluster. Some of the pods need to reach an internal DNS entry that resolves to a port on some of the other pods, with a custom address like http://mypod:443 or wss://mypod:443. How can I go about achieving this?
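For context, the usual way to make a name like mypod resolvable inside the cluster is a Service that selects the target pods. A minimal sketch, assuming the pods carry the label app: mypod (both names are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: mypod              # pods in the same namespace can then use http://mypod:443
spec:
  selector:
    app: mypod             # assumed pod label
  ports:
    - port: 443            # port the DNS name exposes
      targetPort: 443      # container port on the backing pods

From other namespaces the qualified name mypod.<namespace>.svc.cluster.local would be used instead.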

Related

Service external IP pending on kubernetes hosted on jelastic

I have installed my Kubernetes cluster on Jelastic. Now I have defined a Service of type LoadBalancer and would like it to be assigned an external IP. The external IP is currently marked as pending. What should I do to make it non-pending? Do I have to provide the worker nodes with an external IPv4?
In my current setup, my worker nodes have no IPv4 because I put an nginx load balancer in front of the cluster.
The IPv4 is set on the nginx node. Is that a problem? If I want to access my LoadBalancer service inside of my Kubernetes cluster, what should I do?
For the LoadBalancer service type to work, the cloud provider must implement the relevant APIs.
As for Jelastic, as per their docs, they don't support it: https://docs.jelastic.com/kubernetes-exposing-services/
Jelastic PaaS does not support the LoadBalancer service type currently.
In Jelastic, public IP addresses have to be attached to worker nodes.
Every worker node runs an ingress controller instance (based on nginx/haproxy/traefik) with HTTP/HTTPS listeners that can forward traffic to the required service.
You just have to bind your domain as a CNAME to the environment FQDN, and then every worker node can accept requests in round-robin DNS mode; a minimal Ingress is sketched at the end of this answer.
Does this scenario work for you, or do you have a specific requirement to use an external load balancer?
By default, when public IPs are not attached to worker instances, the traffic goes through the Shared Load Balancer.
P.S. If you install the Certificate Manager add-on in your K8s cluster, you can also issue free Let's Encrypt certificates.
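To illustrate the ingress-based routing described above, here is a minimal sketch; the host and Service names are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.com        # the domain bound as a CNAME to the environment FQDN
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp          # assumed Service in front of your pods
                port:
                  number: 80

The ingress controller instance on each worker node picks this up and forwards matching HTTP requests to the myapp Service.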

Kubernetes, How to enable inter-pod DNS within a same Deployment?

I am new to Kubernetes, and I am trying to make inter-pod communication over DNS work.
Pods in my k8s cluster are spawned by Deployments. My problem is that every Pod reports its hostname to ZooKeeper, and pods use the hostnames found in ZooKeeper to ping their peers. This always fails because the peers' hostnames are not resolvable between pods.
The only solution now is to manually add each pod's hostname to its peers' /etc/hosts files. But this method will not scale to large clusters.
A DNS solution for inter-pod communication that keeps records for newly created pods and deletes the records of dead pods would be great.
Thanks in advance.
One solution I found was to set hostname and subdomain under spec.template.spec (see the sketch below); then communication over hostnames between pods succeeds.
However, this solution is fairly clumsy, because I cannot set the replicas of a Deployment to more than 1, or I will get more than one pod with the same hostname in the cluster. If I have 10 peers with the same function in a cluster, I will need to create 10 Deployments.
Any better solutions?
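For reference, this is roughly what that workaround looks like; all names are placeholders, and the subdomain record only resolves if a headless Service named zk-peers exists in the same namespace:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: zk-node-1
spec:
  replicas: 1                    # must stay at 1, or pods collide on the same hostname
  selector:
    matchLabels:
      app: zk-node-1
  template:
    metadata:
      labels:
        app: zk-node-1
    spec:
      hostname: zk-node-1        # resolvable as zk-node-1.zk-peers.<namespace>.svc.cluster.local
      subdomain: zk-peers        # must match a headless Service of the same name
      containers:
        - name: zookeeper
          image: zookeeper       # assumed image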
You need to use a Service definition pointing to your pods:
https://kubernetes.io/docs/concepts/services-networking/service/
With that you get a load-balanced proxy for inter-pod communication, and the internal DNS in Kubernetes tracks the Service instead of each pod, no matter the state of the pods.
If that simple solution doesn't fit your needs, you can replace kube-dns as the default internal DNS with CoreDNS.
https://coredns.io/
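If you need per-pod DNS records rather than one load-balanced address (as in the ZooKeeper scenario above), a headless Service is the usual building block. A sketch, with assumed names:

apiVersion: v1
kind: Service
metadata:
  name: zk-peers
spec:
  clusterIP: None          # headless: DNS returns the individual pod IPs, not a single virtual IP
  selector:
    app: zk                # assumed label on the peer pods
  ports:
    - port: 2181           # assumed ZooKeeper client port

Combined with a StatefulSet instead of a Deployment, each pod additionally gets a stable name of the form zk-0.zk-peers.<namespace>.svc.cluster.local, which avoids the one-Deployment-per-peer workaround entirely.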

Change kubernetes --service-cluster-ip-range after the cluster is initialized

After initializing a multi-master Kubernetes cluster I realized that my --service-cluster-ip-range overlaps with the actual host subnet. A lot of service IPs overlap with the IPs of the actual kube node hosts. Because of that I now see a lot of issues in the kubedns pods, like the one below:
getsockopt: no route to host
My LAN is: 10.100.0.0/24
Kube service subnet is: 10.96.0.0/12
Now, after removing all the services, I want to change this in the kube-apiserver pod YAMLs, but it won't let me, saying that this specific section is not subject to change. Is there a way to fix this?
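For reference, the range is set as a flag on the API server. In a kubeadm-based cluster it lives in the static pod manifest on each master; the path and the replacement range below are assumptions for illustration:

# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --service-cluster-ip-range=172.16.0.0/16   # must not overlap 10.100.0.0/24
        # ...other flags unchanged

Note that kube-controller-manager takes the same --service-cluster-ip-range flag, and existing Services keep their old ClusterIPs, so they have to be recreated afterwards. In practice it is often simpler to rebuild the cluster with kubeadm init --service-cidr=<new range>.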

Kubernetes service IP and ports

I've deployed a hello-world application on my Kubernetes cluster. When I access the app via <cluster ip>:<port> in my browser, I get the hello-kuleuven app webpage.
I understand that from outside the cluster you have to access the app via the node IP and the port specified in the deployment file (which in my case is 30001). From inside the cluster you reach the service via its cluster-local IP and another port number, in my case 10.111.152.164:8080.
My question is about the last line of the webpage:
Kubernetes listening in 443 available at tcp://10.96.0.1:443
Since the service is already accessible from inside and outside the cluster via other ports and IPs, I'm not sure what this does.
The IP 10.96.0.1 is actually the ClusterIP of the kubernetes Service in the default namespace, i.e. the in-cluster address of the API server. You can see it using
kubectl get svc kubernetes -n default
The app most likely prints that line from the KUBERNETES_PORT environment variable (tcp://10.96.0.1:443) that the kubelet injects into every container so that workloads can reach the Kubernetes API.
DNS resolution, by contrast, is handled by the kube-dns Service in the kube-system namespace: Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets to tell individual containers to use the DNS Service's IP to resolve DNS names.
So every pod you deploy uses the kube-dns Service to resolve DNS names, while 10.96.0.1:443 is how pods reach the Kubernetes API itself.
Read more about kube-dns in the official Kubernetes DNS documentation.
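To see both pieces from inside the cluster, a quick check (the pod name dns-test is arbitrary; busybox:1.28 is pinned because nslookup is known to misbehave in newer busybox images):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

The query is answered by the kube-dns Service address, and the result is 10.96.0.1, the ClusterIP of the kubernetes API Service.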

Kubernetes custom domain name

I'm running Kubernetes with a Minikube node on my machine. The pods access each other by their .metadata.name, and I would like to have a custom domain for that name.
I.e., one pod accesses Elastic's machine via elasticsearch.blahblah.com.
Thanks for any suggestions.
You should have DNS records for pods and Services by default, because the kube-dns addon is enabled by default in Minikube.
To check the kube-dns addon status, use the command below:
kubectl get pod -n kube-system
Here is how the cluster add-on DNS server works:
An optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each. If DNS has been enabled throughout the cluster then all Pods should be able to do name resolution of Services automatically.
For example, if you have a Service called "my-service" in Kubernetes Namespace "my-ns" a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
Kubernetes also supports DNS SRV (service) records for named ports. If the "my-service.my-ns" Service has a port named "http" with protocol TCP, you can do a DNS SRV query for "_http._tcp.my-service.my-ns" to discover the port number for "http".
The Kubernetes DNS server is the only way to access services of type ExternalName.
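For example, from a pod that has dig available, the SRV query from the quoted passage would look like this (service and namespace names are the ones from the example):

dig +short SRV _http._tcp.my-service.my-ns.svc.cluster.local

The answer contains the port number and the target host for the Service's http port.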
You can follow the Configure DNS Service document for configuration instructions.
Also, you can check DNS for Services and Pods for additional information.
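If you specifically want pods to reach the in-cluster service under elasticsearch.blahblah.com, one possible approach, assuming your cluster runs CoreDNS (recent Minikube versions do) and the Service is named elasticsearch in the default namespace, is a rewrite rule in the CoreDNS Corefile (kubectl -n kube-system edit configmap coredns):

.:53 {
    rewrite name elasticsearch.blahblah.com elasticsearch.default.svc.cluster.local
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
}

After the CoreDNS pods reload the configuration, lookups for elasticsearch.blahblah.com from any pod resolve to the Service's ClusterIP.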