Kubernetes with MetalLB and Traefik behind OpenVPN

I have a Kubernetes cluster installed on a bare metal server, with MetalLB as the external load balancer and Traefik as the reverse proxy. The cluster sits behind an OpenVPN server with subnet 10.1.0.0/24, and the server's IP is 10.1.0.1.
For MetalLB I assigned the range 10.1.1.0/24 as the address pool, so the Traefik LoadBalancer IP is 10.1.1.1.
I also run my own DNS server, which is pushed to machines when they connect to the VPN.
If I create a domain for one of my apps inside the Kubernetes cluster, which IP should I point the domain to so that I can access the app through that domain from other servers that are also connected to the VPN?
I think I misconfigured something, but I got stuck.

You need to point the domain to the Traefik LoadBalancer IP, which is 10.1.1.1 in your case. The IP address you have shared is an external IP provided by MetalLB.
When a client connects to the application using the domain, DNS resolution resolves the domain to the Traefik LB IP. The traffic is sent to that IP, and Traefik routes it to the appropriate Service and pod in your Kubernetes cluster based on the rules you have defined in Traefik. Here is a blog post by Peter Gillich for reference.
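For example, a plain Kubernetes Ingress handled by Traefik could look roughly like the sketch below. The names are assumptions for illustration: a Service called my-app on port 80 in front of the application, an IngressClass named traefik (the Traefik chart usually registers one), and a placeholder hostname app.example.internal created on your private DNS server.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: traefik          # assumed IngressClass registered by Traefik
  rules:
  - host: app.example.internal       # placeholder domain defined on the VPN DNS server
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app             # placeholder Service in front of the app pods
            port:
              number: 80
With an A record app.example.internal -> 10.1.1.1 on the VPN DNS server, other servers on the VPN resolve the domain to the MetalLB-assigned address, and Traefik matches the Host header against the rule and forwards the request to the Service.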

Related

Real IP (Domains and Subdomains) on Bare Metal Cluster with MetalLB and Ingress

Help me figure it out.
I have a bare metal Kubernetes cluster with three nodes, and each node has a public IP.
I have installed MetalLB and an Ingress Controller.
It is not clear to me which IP I should point domains and subdomains to so that they can be resolved by the Ingress Controller.
Do I need to define in advance which node the Ingress Controller will be launched on?
Do I need to install the Ingress Controller, then look up which worker node it landed on and point all domains and subdomains there?
What happens if, after a cluster restart, the Ingress Controller is scheduled onto another node?
All the tutorials I've seen show how it works locally or with a cloud load balancer.
Help me understand how this should work correctly.
Usually, when you install MetalLB, you configure a pool of addresses that can be assigned to LoadBalancer Services whenever they are created. These IP addresses need to actually be available; they cannot be created out of nothing, of course. They could be leased from your hosting provider, for example.
If instead you have a private bare metal cluster that serves only your LAN, you can simply pick a private range of IP addresses that is not otherwise in use.
Then, once MetalLB is running, what happens is the following:
Someone or something creates a LoadBalancer Service (a Helm chart, a user applying a manifest or running a command, etc.)
The newly created Service needs an external IP. MetalLB selects one address from the configured range and assigns it to that Service.
MetalLB starts announcing, using standard protocols, that the IP address can now be reached by contacting the cluster. It can work either in Layer 2 mode (one node of the cluster holds the additional IP address) or in BGP mode (true load balancing across all nodes of the cluster).
From that point on, you can reach the new Service by contacting this newly assigned IP address (which is NOT the IP of any of the cluster nodes).
Usually, the Ingress Controller ships with a LoadBalancer Service (which grabs an external IP address from MetalLB), and you can then reach the Ingress Controller at that IP.
As for your other questions, you don't need to worry about where the Ingress Controller is running or similar details; that is handled automatically.
The only thing you may want to do is to make the domain names which you want to serve point to the external IP address assigned to the Ingress Controller.
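As a rough sketch of the address pool configuration, this is what the Layer 2 setup looked like in the older ConfigMap-based MetalLB releases; the address range is a placeholder for whatever unused range you pick, and newer MetalLB versions express the same thing with IPAddressPool and L2Advertisement custom resources instead.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2               # one node answers ARP for each assigned address
      addresses:
      - 192.168.1.240-192.168.1.250  # placeholder: unused addresses on your LAN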
Some docs:
MetalLB explanations
Bitnami MetalLB chart
LoadBalancer service docs
As an alternative (especially when you want "static" IP addresses) I should mention HAProxy, installed outside the Kubernetes cluster on a bare server/VM/LXC container/etc. and configured to send all incoming 80/443 traffic to the NodePort of the ingress controller on all Kubernetes workers (if no ingress pod is running on a given worker, the traffic is forwarded on by Kubernetes anyway).
Of course, nowadays IP addresses are also "cattle", not "pets" anymore, so MetalLB is the more "Kubernetes-ish" solution, but who knows...
This is a link describing the HAProxy solution (I am not affiliated with the author):
https://itnext.io/bare-metal-kubernetes-with-kubeadm-nginx-ingress-controller-and-haproxy-bb0a7ef29d4e
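The Kubernetes side of that HAProxy setup is just the ingress controller exposed as a NodePort Service with fixed ports that HAProxy can target on every worker. A minimal sketch, assuming an ingress-nginx deployment with its usual labels (names and port numbers are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed labels on the controller pods
  ports:
  - name: http
    port: 80
    nodePort: 30080    # HAProxy forwards incoming :80 traffic to this port on every worker
  - name: https
    port: 443
    nodePort: 30443    # HAProxy forwards incoming :443 traffic to this port on every worker
HAProxy would then list every worker node's IP on ports 30080/30443 as backends for its 80/443 frontends.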

How to build the network architecture for a Kubernetes Raspberry Pi cluster?

I want to deploy a website on my Kubernetes cluster.
I followed this guide to set up my Kubernetes cluster on my set of Raspberry Pis. I have tested it with some nginx containers and it works to a certain degree, but I have to find the correct IP of the machine a pod is deployed on.
Now that I have registered a domain, I would like to forward traffic to the website deployed on my Kubernetes cluster.
I have done this before with nginx, certbot and Let's Encrypt, without containerisation. Now I am just missing the part about how Kubernetes handles the network. I assumed it was similar to Swarm's networking, which forwards every request to the correct machine, but Kubernetes does it differently.
TL;DR: How do I deploy a website on a self-built Raspberry Pi Kubernetes cluster?
You need to create a Kubernetes Service (documentation) to expose the web service to the outside world.
There are two types of Services relevant to deployments outside of cloud providers:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So what you probably want is a NodePort Service, which will expose the service on a fixed port on each of your nodes (documentation and examples).
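A minimal sketch of such a NodePort Service, assuming the website pods carry the label app: website and the container listens on port 80 (both are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  type: NodePort
  selector:
    app: website        # must match the labels on the website pods
  ports:
  - port: 80            # port the Service exposes inside the cluster
    targetPort: 80      # port the nginx container listens on
    nodePort: 30080     # fixed external port on every node (allowed range 30000-32767)
The site is then reachable at <any-node-IP>:30080, which is the address and port you would forward your domain or home router to.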

Kubernetes static IP ingress on a private cloud

We are trying to build a Kubernetes cluster on our private VMware infrastructure. I have the cluster up and running and an ingress running; however, I can't figure out how to route traffic to the ingress.
We are using Rancher 2.0.7.
I would like to have the following setup if possible:
DNSMadeEasy.com to handle DNS A Records (DNS to External IP)
Firewall we host (External IP to Static Private IP)
Kubernetes Ingress (Private IP to Cluster Load balanced Ingress)
Load Balanced Ingress (Ingress to Service with multiple instances)
I can figure out the DNS and firewall routing, however I can't figure out how to set a static External IP address on the Ingress Load Balancer.
I can see you can specify a Host name in the Load balancer, however how does this become publicly available?
Could it be because we don't have an external load balancer?
What am I missing on setup of the Ingress/Load balancer?
Thank you in advance, I have spent about two weeks trying to get this to work.
You need to be able to set the Ingress Service to type: LoadBalancer. With on-prem infrastructure, this requires either an external load balancer, such as an F5, or something that can fill that role in software.
One option to get this working is to use MetalLB.
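Once MetalLB is running with a pool that covers the static private IP you want, a sketch of pinning the ingress controller's Service to that address could look like this; the namespace, labels and 10.20.30.40 are placeholders, and loadBalancerIP is the classic Service field that MetalLB honours when you request a specific address:
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
  namespace: ingress
spec:
  type: LoadBalancer
  loadBalancerIP: 10.20.30.40    # placeholder: static address taken from the MetalLB pool
  selector:
    app: ingress-controller      # assumed labels on the ingress controller pods
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
Your firewall rule then forwards the external IP to 10.20.30.40, which stays stable across restarts.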

Is it possible to find incoming IP addresses in a Google Container Engine cluster?

The nginx access log of my deployment in a GKE Kubernetes cluster (exposed with a type: LoadBalancer Kubernetes Service) shows internal IPs instead of the real visitor IPs.
Is there a way to find the real IPs anywhere, perhaps in some log file provided by GKE/Kubernetes?
Right now, the type: LoadBalancer service does a double hop. The external request is balanced among all the cluster's nodes, and then kube-proxy balances amongst the actual service backends.
kube-proxy NATs the request. E.g. a client request from 1.2.3.4 to your external load balancer at 100.99.98.97 gets NATed on the node to 10.128.0.1->10.100.0.123 (the node's private IP to the pod's cluster IP). So the "src ip" you see in the backend is actually the private IP of the node.
There is a feature planned with a corresponding design proposal for preservation of client IPs of LoadBalancer services.
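That design eventually shipped as the externalTrafficPolicy field on Services: on a reasonably current cluster, setting it to Local skips the second hop and preserves the client source IP, at the cost of traffic only being sent to nodes that actually run a backend pod. A sketch, with the service name and selector as placeholders:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # keep the original client IP; no cross-node NAT hop
  selector:
    app: nginx                   # placeholder: labels on the nginx pods
  ports:
  - port: 80
    targetPort: 80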
You could use the real IP module for nginx.
Pass your internal GKE net as a set_real_ip_from directive and you'll see the real client IP in your logs:
set_real_ip_from 192.168.1.0/24;
Typically you would add to the nginx configuration:
The load balancer's IP, i.e. the IP that currently shows up in your logs instead of the real client IP
The Kubernetes network, i.e. the subnet your Pods are in (the "Docker subnet")
Adding these lines to the http block of my nginx.conf fixed the issue for me, and real visitor IPs started showing up in the Stackdriver Log Viewer:
http {
...
real_ip_recursive on;                  # walk X-Forwarded-For past all trusted proxies
real_ip_header X-Forwarded-For;        # take the client IP from this header
set_real_ip_from 127.0.0.1;            # trust loopback
set_real_ip_from 192.168.0.0/24;       # trust the load balancer / node subnet (cluster-specific)
set_real_ip_from 10.0.0.0/8;           # trust the pod network (cluster-specific)
...
}
I'm a happy camper :)

Kubernetes with Google Cloud DNS

Using a Google Container Engine cluster running Kubernetes, what would the process be in order to point http://mydomain.co.uk onto a LoadBalanced ReplicationController?
I'm aware Kubernetes supports SkyDNS - how would I go about delegating Google Cloud DNS for a domain name onto the internal Kubernetes cluster DNS service?
You will need to create a service that maps onto the pods in your replication controller and then expose that service outside of your cluster. You have two options to expose your web service externally:
Set your service to type: LoadBalancer, which will provision a network load balancer.
Use the ingress support in Kubernetes to create an HTTP(S) load balancer.
The end result of either option is that you will have a public IP address that is routed to the service backed by your replication controller.
Once you have that IP address, you will need to manually configure a DNS record to point your domain name at the IP address.
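For the first option, a minimal sketch of such a Service; the name, selector and ports are placeholders that must match the pods managed by your replication controller:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web            # must match the pod labels set by the replication controller
  ports:
  - port: 80            # external port on the provisioned network load balancer
    targetPort: 8080    # assumed container port of the web pods
Once kubectl get service web shows the provisioned EXTERNAL-IP, that is the address to put in the A record for mydomain.co.uk.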