How to set up an etcd cluster in Kubernetes using DNS discovery (SRV)? - kubernetes

I am looking to have a dynamic etcd cluster running inside my k8s cluster. The best way I can think of doing it dynamically (no hardcoded addresses, names, etc.) is to use DNS discovery with the internal k8s DNS (CoreDNS).
I can find scattered information about SRV records created for Services in k8s, and some explanations of how etcd DNS discovery works, but no complete how-to.
For example:
how does k8s name SRV entries?
should they be named with a specific way for etcd to be able to find them?
should any special CoreDNS setting be set?
Any help on that would be greatly appreciated.
references:
https://coreos.com/etcd/docs/latest/v2/clustering.html#dns-discovery
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

how does k8s name SRV entries?
via Service.spec.ports[].name — for every named port of a Service, Kubernetes publishes an SRV record of the form _<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local. This is why almost everything in Kubernetes has to be a DNS-friendly name: a lot of the time, it does put them in DNS for you.
A Pod that has dig or a new enough nslookup will then show you:
$ dig SRV kubernetes.default.svc.cluster.local.
and you'll see the names of the ports that the kubernetes Service is advertising.
should they be named with a specific way for etcd to be able to find them?
Yes, as one can see in the page you linked to, they need to be named one of these four:
_etcd-client
_etcd-client-ssl
_etcd-server
_etcd-server-ssl
so something like this on the kubernetes side (note that in a Service spec the field is targetPort; containerPort belongs in the Pod spec):
ports:
- name: etcd-client
  port: 2379
  targetPort: whatever
- name: etcd-server
  port: 2380
  targetPort: whatever
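For etcd's SRV discovery to find each peer individually, the Service generally needs to be headless (clusterIP: None) so the SRV records resolve to the individual pod addresses rather than a single virtual IP. A minimal sketch, assuming a Service named etcd in the default namespace and pods labelled app: etcd (both names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd
spec:
  clusterIP: None        # headless: SRV targets are individual pod addresses
  selector:
    app: etcd            # assumed pod label
  ports:
  - name: etcd-client    # published as _etcd-client._tcp.etcd.default.svc.cluster.local
    port: 2379
  - name: etcd-server    # published as _etcd-server._tcp.etcd.default.svc.cluster.local
    port: 2380
```

With that in place, etcd members would be started with --discovery-srv etcd.default.svc.cluster.local so they look up the _etcd-server._tcp records to find their peers.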

Related

How to set DNS entries & network configuration for a Kubernetes cluster at home (noob here)

I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kinda difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with Flannel pre-installed as the CNI, as well as Traefik as an ingress controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and as they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like this (I don't know exactly how DNS entries are written, this is just from the top of my head for your reference; the entries are working fine):
example.com.    A      [[my-ip]]
*.example.com.  CNAME  example.com.
I've currently set up a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind, and my Rancher is throwing a 503 after visiting global settings, even though I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know k3s has a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node it's pointing to could, theoretically, just stop working, and then all services would no longer be reachable from outside.
Assuming your apps are running on port 80 and port 443 your ingress should give you a service with an external ip and you would point your dns at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer, and I will have to make some assumptions about your setup, but I will do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to: k3s can create an embedded etcd cluster for you, run by the k3s server nodes themselves.
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). keepalived will give you a virtual IP, and you would configure your kubeconfig (that you get from k3s) to talk to that IP. On your router you will want to reserve an IP for it (make sure not to assign it to any computers).
This is a good video that explains how to do it with a nodejs server but concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
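A minimal haproxy.cfg sketch for the setup described above (the master IPs and backend names are illustrative placeholders, not from the original post):

```
# TCP passthrough to the k3s API servers; keepalived holds the virtual IP
frontend k3s-api
    bind *:6443
    mode tcp
    default_backend k3s-masters

backend k3s-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master-01 192.168.1.11:6443 check
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check
```

The same pattern (a tcp frontend/backend pair per port) can be repeated for ports 80 and 443 pointing at the worker nodes, so no single master is a point of failure.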
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP. I prefer to do this with MetalLB.
MetalLB gives you a Service of type LoadBalancer with an external IP.
Disable the bundled k3s service load balancer when creating the initial master node, as described here:
https://metallb.universe.tf/configuration/k3s/
Configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the YAML below. In this example you have 11 IPs, in the range 192.168.1.240 to 192.168.1.250.
First install MetalLB with these YAML files:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Then create the address-pool config as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
and apply it:
kubectl apply -f metallb-cm.yaml
ingress
Your ingress will need a Service of type LoadBalancer; use its external IP as the external entry point.
kubectl get service -A - look for your ingress service and check that it has an external IP and does not say <pending>.
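For reference, this is roughly what a Service of type LoadBalancer looks like; MetalLB would assign it an external IP from the pool configured earlier (the name, selector, and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer      # MetalLB assigns an external IP from its address pool
  selector:
    app: my-app           # assumed pod label
  ports:
  - port: 80
    targetPort: 8080      # assumed container port
```

You would then point a DNS record at the assigned external IP.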
I will do my best to answer any of your follow up questions. Good Luck!

How to access from outside a RabbitMQ Kubernetes cluster using rabbitmqctl?

I've a RabbitMQ cluster running in a Kubernetes environment. I don't have access to the containers' shell, so I'm trying to run rabbitmqctl from a local container (same image).
These ports are exposed:
- 15672 (exposed as 32672)
- 5671 (exposed as 32671)
- 4369 (exposed as 32369)
- 25672 (exposed as 32256)
The correct cookie is on $HOME/.erlang.cookie on my local container.
How to specify the cluster URL and port to rabbitmqctl, so I can access the RabbitMQ cluster from outside?
Is it necessary to expose other ports?
Is it even possible to do this, since I can't find any reference to this on documentation?
You will want to expose ports 4369 and 25672 using the same port numbers externally as I can't think of a way to tell the Erlang VM running rabbitmqctl to use a different port for EPMD lookups. You should also expose 35672-35682 using the same port range externally.
Since you're using Kubernetes, I'll assume that you are using long names. Assuming that, within your container, your node name is rabbit@container1.my.org, you would access it externally with a command like:
rabbitmqctl -l -n rabbit@container1.my.org status
Please note that container1.my.org must resolve via DNS to the correct IP address for connecting to that container.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on Stack Overflow.

How to expose Kubernetes DNS externally

Is it possible for an external DNS server to resolve against the K8s cluster DNS? I want applications residing outside of the cluster to be able to resolve in-cluster DNS names.
It's possible, there's a good article proving the concept: https://blog.heptio.com/configuring-your-linux-host-to-resolve-a-local-kubernetes-clusters-service-urls-a8c7bdb212a7
However, I agree with Dan that exposing via service + ingress/ELB + external-dns is a common way to solve this. And for dev purposes I use https://github.com/txn2/kubefwd which also hacks name resolution.
Although it may be possible to expose coredns and thus forward requests to kubernetes, the typical approach I've taken, in aws, is to use the external-dns controller.
This will sync services and ingresses with provides like aws. It comes with some caveats, but I've used it successfully in prod environments.
coredns will return cluster-internal IP addresses that are normally unreachable from outside the cluster. The correct answer is the deleted one by MichaelK suggesting the coredns plugin k8s_external: https://coredns.io/plugins/k8s_external/
k8s_external is already part of coredns. Just edit the config with
kubectl -n kube-system edit configmap coredns and add k8s_external after the kubernetes directive, per the docs:
kubernetes cluster.local
k8s_external example.org
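In context, the Corefile would look roughly like this (example.org is the external zone; the surrounding directives are common CoreDNS defaults and may differ in your cluster):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    k8s_external example.org
    forward . /etc/resolv.conf
    cache 30
}
```

With this, a Service with an external IP becomes resolvable as <service>.<namespace>.example.org from outside the cluster.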
k8s_gateway also handles dns for ingress resources
https://coredns.io/explugins/k8s_gateway/
https://github.com/ori-edge/k8s_gateway (includes helm chart)
You'll also want something like metallb or rancher/klipper-lb handling services with type: LoadBalancer as k8s_gateway won't resolve NodePort services.
MichaelK is the author of k8s_gateway; not sure why his reply was deleted by a moderator.
I've never done that, but technically it should be possible by exposing the kube-dns service as a NodePort. Then you would configure your external DNS server to forward queries for the Kube DNS zone "cluster.local" (or any other you have in Kube) to the kube-dns address and port.
In Bind that can be done like this:
zone "cluster.local" {
    type forward;
    forward only;
    forwarders { ANY_NODE_IP port NODEPORT_PORT; };
};
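On the Kubernetes side, exposing the cluster DNS via NodePort could be sketched like this (the service name and nodePort number are illustrative; CoreDNS pods in standard deployments carry the label k8s-app: kube-dns):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns-external
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kube-dns     # label used by standard CoreDNS/kube-dns deployments
  ports:
  - name: dns-udp
    port: 53
    protocol: UDP
    nodePort: 30053       # must be in the cluster's NodePort range
  - name: dns-tcp
    port: 53
    protocol: TCP
    nodePort: 30053
```

The external DNS server would then forward "cluster.local" queries to any node IP on port 30053.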

Routing internal traffic in Kubernetes?

We presently have a setup where applications within our mesos/marathon cluster want to reach out to services which may or may not reside in our mesos/marathon cluster. Ingress for external traffic into the cluster is accomplished via an Amazon ELB sitting in front of a cluster of Traefik instances, which then chooses the appropriate set of container instances to load-balance to via the incoming HTTP Host header compared against essentially a many-to-one association of configured host headers against a particular container instance. Internal-to-internal traffic is actually handled by this same route as well, as the DNS record that is associated with a given service is mapped to that same ELB both internal to and external to our mesos/marathon cluster. We also give the ability to have multiple DNS records pointing against the same container set.
This setup works, but causes seemingly unnecessary network traffic and load against our ELBs and our Traefik cluster: ideally, the applications in the containers (or some other component) could determine on their own that a target service lives within the same mesos/marathon cluster, and call either something internal to the cluster fronting the set of containers, or the specific container directly.
From what I understand of Kubernetes, Kubernetes provides the concept of services, which essentially can act as the front for a set of pods based on configuration for which pods the service should match over. However, I'm not entirely sure of the mechanism by which we can have applications in a Kubernetes cluster know transparently to direct network traffic to the service IPs. I think that some of this can be helped by having Envoy proxy traffic meant for, e.g., <application-name>.<cluster-name>.company.com to the service name, but if we have a CNAME that maps to that previous DNS entry (say, <application-name>.company.com), I'm not entirely sure how we can avoid exiting the cluster.
Is there a good way to solve for both cases? We are trying to avoid having our applications' logic have to understand that it's sitting in a particular cluster and would prefer a component outside of the applications to perform the routing appropriately.
If I am fundamentally misunderstanding a particular component, I would gladly appreciate correction!
When you use service-to-service communication inside a cluster, you are using the Service abstraction, which is something like a static point that will route traffic to the right pods.
A Service endpoint is available only from inside the cluster, by its IP or by the internal DNS name provided by the internal Kubernetes DNS server. So, for communicating inside a cluster, you can use DNS names like <servicename>.<namespace>.svc.cluster.local.
But, what is more important, a Service has a static IP address.
So, you can add that static IP as a hosts record to the pods inside the cluster to make sure that they communicate with each other inside the cluster.
For that, you can use HostAlias feature. Here is an example of configuration:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "10.0.1.23"
    hostnames:
    - "my.first.internal.service.example.com"
  - ip: "10.1.2.3"
    hostnames:
    - "my.second.internal.service.example.com"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
So, if you use your internal Service IP in combination with the service's public FQDN, all traffic from your pod will stay 100% inside the cluster, because the application will use the internal IP address.
Alternatively, you can use an upstream DNS server for the separate zone which contains the same aliases; the idea is the same.
With newer versions of Kubernetes, which use CoreDNS to provide the DNS service and have more features, it will be a bit simpler.

I just want to run a simple app in Kubernetes

I have a docker image that serves a simple static web page.
I have a working Kubernetes cluster of 4 nodes (physical servers not in the cloud anywhere).
I want to run that docker image on 2 of the 4 Kubernetes nodes and have it be accessible to the world outside the cluster and load balanced and have it move it to another node if one dies.
Do I need to make a pod then a replication controller then a kube proxy something?
Or do I need to just make a replication controller and expose it somehow?
Do I need to make service?
I don't need help with how to make any of those things, that seems well documented, but what I can't tell what I need to make.
What you need is to expose your service (which consists of pods that are run/scaled/restarted by your replication controller). Using a Deployment instead of a replication controller has additional benefits (mainly for updating the app).
If you are on bare metal, then you probably want to expose your service via type: NodePort, so every node in your cluster will open a static port that routes traffic to the pods.
You can then either point your load balancer at the nodes on that port, or create a DNS entry with all Kubernetes nodes.
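A minimal sketch of that approach, assuming the static-site image is called my-static-site:latest and using an illustrative label and NodePort (all names here are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-web
spec:
  replicas: 2                      # run 2 copies; rescheduled elsewhere if a node dies
  selector:
    matchLabels:
      app: static-web
  template:
    metadata:
      labels:
        app: static-web
    spec:
      containers:
      - name: web
        image: my-static-site:latest   # your image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: static-web
spec:
  type: NodePort
  selector:
    app: static-web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080                # reachable on every node at this port
```

Applying this gives you the Deployment (pods + scaling + rescheduling) and the NodePort Service (stable access on every node) in one step.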
Docs: http://kubernetes.io/docs/user-guide/quick-start/
You'll need:
1) A load balancer on one of your nodes in the cluster, i.e. a reverse-proxy Pod like nginx that proxies the traffic to an upstream.
This Pod will need to be exposed to the outside using hostPort, like:
ports:
- containerPort: 80
  hostPort: 80
  name: http
- containerPort: 443
  hostPort: 443
  name: https
2) A Service that will use the web server selector as target.
3) Set the Service name (which will resolve to the Service IP) as the upstream in nginx config
4) Deploy your web server Pods, which will have the selector to be targeted by the Service.
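The nginx upstream from step 3 could be sketched like this, assuming the Service is named web-service in the default namespace (both names are illustrative):

```
upstream web-backend {
    # the Service name resolves to the Service IP via the cluster DNS
    server web-service.default.svc.cluster.local:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://web-backend;
    }
}
```

Because the Service IP is stable, nginx keeps working as pods behind the Service are restarted or moved.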
You might also want to look at external IPs for the Service:
http://kubernetes.io/docs/user-guide/services/#external-ips
but I personally never managed to get that working on my bare-metal cluster.