When using nginx-ingress in Kubernetes (installed via helm), the X-Real-IP header is not my real IP (the original client IP is not preserved)
I've tried externalTrafficPolicy: "Local" and use-proxy-protocol: "true" as suggested, but it didn't help...
Can you provide us with more info, like the service you're using?
From a quick guess, it looks like you're applying externalTrafficPolicy: "Local" to the wrong service.
I also previously applied it to my NodePort service instead of the Nginx service, and it didn't work.
Please check the service with LoadBalancer type; it's usually named nginx-nginx-ingress-controller. A quick kubectl get services --all-namespaces will show you a list of all the services running.
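If that's what happened, a patch along these lines sets the policy on the right service (the service name and namespace here are the usual defaults from the helm chart; adjust them to your install):

kubectl patch svc nginx-nginx-ingress-controller \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'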
TL;DR
Local means that when a packet arrives at a node, kube-proxy will only distribute the load among pods on that same node, even if pods on other nodes in the cluster are less loaded. Because the packet is never forwarded to another node, the original client source IP is preserved.
With the Cluster value, on the other hand, the balancing takes into account not only the nodes but also the number of pods behind the service, and Kubernetes performs the balancing across the whole cluster to avoid imbalance — at the cost of an extra hop that SNATs the packet and hides the client IP.
https://medium.com/pablo-perez/k8s-externaltrafficpolicy-local-or-cluster-40b259a19404
https://github.com/jetstack/kube-lego/issues/57#issuecomment-277777686
Related
I am currently running a Kubernetes cluster on my own home server (in Proxmox CTs; it was kind of difficult to get working because I am using ZFS too, but it runs now), and the setup is as follows:
lb01: haproxy & keepalived
lb02: haproxy & keepalived
etcd01: etcd node 1
etcd02: etcd node 2
etcd03: etcd node 3
master-01: k3s in server mode with a taint for not accepting any jobs
master-02: same as above, just joining with the token from master-01
master-03: same as master-02
worker-01 - worker-03: k3s agents
If I understand it correctly, k3s ships with flannel pre-installed as the CNI, as well as Traefik as an Ingress Controller.
I've set up Rancher on my cluster as well as Longhorn; the volumes are just ZFS volumes mounted inside the agents though, and as they aren't on different HDDs I've set the replicas to 1. I have a friend running the same setup (we set them up together, just yesterday) and we are planning on joining our networks through VPN tunnels and then providing storage nodes for each other as an offsite backup.
So far I've hopefully got everything correct.
Now to my question: I've got both a static IP at home and a domain, and I've pointed that domain at my static IP.
Something like that (I don't know how DNS entries are actually written, this is just from the top of my head for your reference; the entries are working well):
A example.com. [[my-ip]]
CNAME *.example.com. example.com
I've currently set up a port-forward to one of my master nodes for ports 80 & 443, but I am not quite sure how you would actually configure that with HA in mind. Also, my Rancher is throwing a 503 after visiting Global Settings, but I have not changed anything.
So now my question: how would one actually configure the port-forward? As far as I know k3s has a load balancer pre-installed, but how would one configure those port-forwards for HA? The one master node it's pointing to could, theoretically, just stop working, and then all services would not be reachable anymore from outside.
Assuming your apps are running on port 80 and port 443, your ingress should give you a service with an external IP, and you would point your DNS at that. Read below for more info.
Seems like you are not a noob! You've got a lot going on with your cluster setup. What you are asking is a bit complicated to answer and I will have to make some assumptions about your setup, but I'll do my best to give you at least some initial info.
This tutorial has a ton of great info and may help you with what you are doing. They use kubeadm instead of k3s, but you can skip that section if you want and still use k3s.
https://www.debontonline.com/p/kubernetes.html
If you are setting up and installing etcd on your own, you don't need to do that: k3s can create an embedded etcd cluster for you, started with --cluster-init on the first server and joined by the others with its token.
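For reference, the embedded-etcd flow is roughly this (flags per the k3s HA docs; hostnames are your own):

# on the first server
k3s server --cluster-init
# on the other servers, using the server token from master-01
k3s server --server https://master-01:6443 --token <token>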
Load Balancing your master nodes
The haproxy + keepalived nodes would be configured to point to the IPs of your master nodes at port 6443 (TCP). keepalived will give you a virtual IP, and you would configure your kubeconfig (the one you get from k3s) to talk to that IP. On your router you will want to reserve an IP for it (make sure not to assign it to any computers).
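As a rough sketch, the haproxy side could look like this (the master IPs are hypothetical placeholders; mode tcp matters because the API traffic is TLS, and the checks drop dead masters from rotation):

frontend k3s-api
    bind *:6443
    mode tcp
    default_backend k3s-masters

backend k3s-masters
    mode tcp
    balance roundrobin
    server master-01 192.168.1.11:6443 check
    server master-02 192.168.1.12:6443 check
    server master-03 192.168.1.13:6443 check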
This is a good video that explains how to do it with a nodejs server but concepts are the same for your master nodes:
https://www.youtube.com/watch?v=NizRDkTvxZo
Load Balancing your applications running in the cluster
Use a K8s Service; read more about it here: https://kubernetes.io/docs/concepts/services-networking/service/
Essentially you need an external IP; I prefer to do this with MetalLB.
MetalLB gives you a Service of type LoadBalancer with an external IP.
Add the --disable servicelb flag to k3s when creating the initial master node (k3s's bundled Klipper load balancer would otherwise conflict with MetalLB):
https://metallb.universe.tf/configuration/k3s/
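Concretely, that means installing k3s with Klipper disabled, e.g. (installer invocation per the k3s docs):

curl -sfL https://get.k3s.io | sh -s - server --disable servicelb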
Then configure MetalLB:
https://metallb.universe.tf/configuration/#layer-2-configuration
You will want to reserve more IPs on your router and put them under the addresses section in the yaml below. In this example you have 11 IPs in the range 192.168.1.240 to 192.168.1.250.
Create this as a file, for example metallb-cm.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
kubectl apply -f metallb-cm.yaml
Install MetalLB with these yaml files (apply the namespace manifest before the ConfigMap above, since the metallb-system namespace has to exist first):
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
source - https://metallb.universe.tf/installation/#installation-by-manifest
Ingress
Your ingress controller will need a Service of type LoadBalancer; use the external IP that MetalLB assigns to it as the IP you point your DNS at.
Run kubectl get service -A, look for your ingress service, and check that it has an external IP and does not say pending.
I will do my best to answer any of your follow up questions. Good Luck!
Basically I have the following HDFS cluster set up using docker-compose:
Node 1 with IP 192.168.1.1 has the following services deployed:
Namenode1:9000
HMaster1: 8300
ZooKeeper1:1291
Node 2 with IP 192.168.1.2 has the following services deployed:
Namenode2:9000
ZooKeeper2:1291
How do Traefik / Nginx (Ingress Controllers) forward requests to two different services configured with the same port number?
There are several great tutorials on how ingress and load balancing work in Kubernetes, e.g. this one by Mark Betz. As a general rule, it helps to think in terms of services and workloads instead of the specific nodes your workloads are running on.
A workload deployed in Kubernetes (a so-called Pod) has its own cluster-internal IP address. That pod can have one or more ports open, just on that pod-owned IP address.
If you now have several pods to distribute load across, e.g. 5 web server processes or backend-logic replicas, it would be hard for a client (inside the cluster) to keep track of all those pod IPs, because they change whenever a pod is updated or simply restarted after a crash. This is why Kubernetes has the concept of Services. A Service provides a stable DNS name and IP which transparently "forwards" to one of the healthy pods, so your client only needs to know the DNS name and does not have to track the specific pod IPs.
If you now want to expose such a service to the public, there are different ways. Either you set your service to type: LoadBalancer, which sets up load balancer infrastructure at your cloud provider and routes traffic to the nodes and then to the pods, or you already have an ingress controller in place and just define the routing based on host names and paths. An ingress controller itself is such a load-balanced service with an attached cloud load balancer, and it also has some pods (with e.g. a traefik or nginx container) which then route your packets accordingly.
So coming back to your initial question: if you want to expose a service with several pods of the same kind, you would first create a Service resource that matches your pods using a selector, and then create one single Ingress resource that defines a hostname/path and references this Service. The ingress controller will pick up those Ingress resources and configure traefik or nginx accordingly. The ingress controller doesn't really care about host IPs and port numbers, because it acts on the internal Kubernetes Service IPs, so you don't need to (and shouldn't) expose such a service directly when you have an ingress in place.
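As a sketch (the service and host names here are hypothetical, and older clusters used extensions/v1beta1 instead of this API version), a single Ingress routing to two Services that both listen on port 9000 could look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hdfs-ingress
spec:
  rules:
  - host: namenode1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: namenode1   # Service fronting the pods on node 1
            port:
              number: 9000
  - host: namenode2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: namenode2   # Service fronting the pods on node 2
            port:
              number: 9000

Both rules reference port 9000 without conflict, because each hostname resolves to a different Service and the controller routes on the internal Service IPs.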
I hope this answers your question regarding exposing two workloads over an ingress controller. For details, check the Kubernetes docs on Ingresses. Based on the services you named (zookeeper, hdfs), load balancing and ingresses might not be what you need in this case. Zookeeper instances should be internal in most cases and need to be addressed individually, so you might want to check out headless services for this use case. Also check the Kubernetes docs for a way to run ZooKeeper.
I need a service in a K8s pod to be able to make HTTP calls to downstream services, load-balanced by a NodePort, within the same cluster and namespace.
My constraints are these:
I can do this only through manipulation of Deployment and Service entities (no Ingress; I don't have that level of access to the cluster)
I cannot add any K8s plugins
The port that the NodePort exposes must be randomized, not hard-coded
This whole thing must be automated. I can't set the Deployment with the literal value of the exposed port; it needs to be set by some sort of variable, or similar process.
Is this possible, and, if so, how?
It probably can be done, but it will not be straightforward and you might have to add some custom automation. A NodePort service is meant to be used by an entity outside your cluster.
For communication inside the cluster, a regular Service (with a ClusterIP) will work as designed. Your service can reach another service using DNS service discovery. For example, svc-name.mynamespace.svc.cluster.local would be the DNS entry for a Service svc-name in the mynamespace namespace.
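For instance, from any pod in the cluster (the service name and port here are hypothetical):

curl http://svc-name.mynamespace.svc.cluster.local:8080/

Within the same namespace, plain http://svc-name:8080/ works as well.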
If you can only use a NodePort, which essentially is a port opened on every one of your K8s nodes, you could create another Deployment or Pod running something like nginx or haproxy, fronted by a regular K8s Service with a ClusterIP. Then point nginx or haproxy at the NodePort on all the nodes in your Kubernetes cluster, configured with some kind of healthcheck so that it only forwards to listening NodePorts.
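A minimal nginx.conf fragment for such a proxy might look like this (the node IPs and the NodePort 30080 are hypothetical placeholders; max_fails/fail_timeout act as the passive health check):

upstream nodeport-backend {
    server 192.168.1.11:30080 max_fails=2 fail_timeout=5s;
    server 192.168.1.12:30080 max_fails=2 fail_timeout=5s;
    server 192.168.1.13:30080 max_fails=2 fail_timeout=5s;
}
server {
    listen 80;
    location / {
        proxy_pass http://nodeport-backend;
    }
}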
The above seems like an extra, unnecessary step, but if NodePort from within the cluster is what you need (for some reason), it should do the trick.
Is it possible for an external DNS server to resolve against the K8s cluster DNS? I want applications residing outside of the cluster to be able to resolve the container DNS names.
It's possible; there's a good article proving the concept: https://blog.heptio.com/configuring-your-linux-host-to-resolve-a-local-kubernetes-clusters-service-urls-a8c7bdb212a7
However, I agree with Dan that exposing via service + ingress/ELB + external-dns is a common way to solve this. And for dev purposes I use https://github.com/txn2/kubefwd which also hacks name resolution.
Although it may be possible to expose coredns and thus forward requests to Kubernetes, the typical approach I've taken, in AWS, is to use the external-dns controller.
This will sync services and ingresses with providers like AWS. It comes with some caveats, but I've used it successfully in prod environments.
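As an illustration (the hostname is hypothetical), external-dns watches for an annotation like this on a Service or Ingress and creates the matching record at the provider:

metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.org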
coredns will return cluster-internal IP addresses that are normally unreachable from outside the cluster. The correct answer is the one deleted by MichaelK, suggesting the coredns plugin k8s_external: https://coredns.io/plugins/k8s_external/
k8s_external is already part of coredns. Just edit the config with kubectl -n kube-system edit configmap coredns and add k8s_external after the kubernetes directive, per the docs:
kubernetes cluster.local
k8s_external example.org
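In context, the server block of the Corefile ends up looking roughly like this (example.org stands in for whatever external zone you choose; the surrounding directives are from a typical stock Corefile and may differ in yours):

.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    k8s_external example.org
    forward . /etc/resolv.conf
    cache 30
}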
k8s_gateway also handles DNS for Ingress resources:
https://coredns.io/explugins/k8s_gateway/
https://github.com/ori-edge/k8s_gateway (includes helm chart)
You'll also want something like MetalLB or rancher/klipper-lb handling services of type: LoadBalancer, as k8s_gateway won't resolve NodePort services.
MichaelK is the author of k8s_gateway; not sure why his reply was deleted by a moderator.
I've never done that, but technically it should be possible by exposing the kube-dns service as a NodePort. Then you would configure your external DNS server to forward queries for the Kube DNS zone "cluster.local" (or any other you have in Kube) to the kube-dns address and port.
In Bind that can be done like this:
zone "cluster.local" {
type forward;
forward only;
forwarders{ ANY_NODE_IP port NODEPORT_PORT; };
};
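A sketch of the Kubernetes side (this assumes stock kube-dns/CoreDNS pods labeled k8s-app: kube-dns, and 30053 is an arbitrary choice from the NodePort range):

apiVersion: v1
kind: Service
metadata:
  name: kube-dns-nodeport
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns-udp
    protocol: UDP
    port: 53
    nodePort: 30053
  - name: dns-tcp
    protocol: TCP
    port: 53
    nodePort: 30053

ANY_NODE_IP in the Bind snippet would then be any node's address, and NODEPORT_PORT would be 30053.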
I'm trying to use Kubernetes to make configurations and deployments explicitly defined, and I also like Kubernetes' pod scheduling mechanisms. There are (for now) just 2 apps running with 2 replicas on 3 nodes. But Google Kubernetes Engine's load balancer is extremely expensive for a small app like ours (at least for the moment), and at the same time I'm not willing to change to a single-instance hosting solution on a container or to deploying the app on Docker Swarm etc.
Using a node's IP seemed like a hack, and I thought it might expose some security issues inside the cluster. Therefore I configured a Træfik ingress and an ingress controller to work around Google's expensive flat rate for load balancing, but it turns out an outward-facing ingress spins up a standard load balancer, or I'm missing something.
I hope I'm missing something, since at these rates ($16 a month) I cannot rationalize using Kubernetes for this app from the start.
Is there a way to use GKE without using Google's load balancer?
An Ingress is just a set of rules that tell the cluster how to route to your services, and a Service is another set of rules to reach and load-balance across a set of pods, based on a selector. A Service can use 3 different routing types:
ClusterIP - this gives the service an IP that's only available inside the cluster which routes to the pods.
NodePort - this creates a ClusterIP, and then creates an externally reachable port on every single node in the cluster. Traffic to those ports routes to the internal service IP and then to the pods.
LoadBalancer - this creates a ClusterIP, then a NodePort, and then provisions a load balancer from a provider (if available like on GKE). Traffic hits the load balancer, then a port on one of the nodes, then the internal IP, then finally a pod.
These different types of services are not mutually exclusive but actually build on each other, which explains why anything public must be using a NodePort. Think about it: how else would traffic reach your cluster? A cloud load balancer just directs requests to your nodes, pointed at one of the NodePort ports. If you don't want a GKE load balancer then you can already skip it and access those ports directly.
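For illustration (the names here are hypothetical), a plain NodePort Service looks like this:

apiVersion: v1
kind: Service
metadata:
  name: yourapp
spec:
  type: NodePort
  selector:
    app: yourapp
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 80    # container port
    nodePort: 30080   # externally reachable port on every node; omit to auto-assign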
The downside is that NodePorts are limited to the range 30000-32767. If you need the standard HTTP ports 80/443, you can't accomplish this with a Service and must instead specify the port directly in your Deployment. Use the hostPort setting to bind the container directly to port 80 on the node:
containers:
- name: yourapp
  image: yourimage
  ports:
  - name: http
    containerPort: 80
    hostPort: 80 ### this will bind to port 80 on the actual node
This might work for you; it routes traffic directly to the container without any load balancing, but if a node has problems or the app stops running on a node, it will be unavailable there.
If you still want load balancing, you can run a DaemonSet (so that it's available on every node) with Nginx (or any other proxy) exposed via hostPort, and that will then route to your internal services. An easy way to run this is with the standard nginx-ingress package; just skip creating the LoadBalancer service for it and use the hostPort setting instead. The Helm chart can be configured for this:
https://github.com/helm/charts/tree/master/stable/nginx-ingress
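For example, values along these lines run the controller as a DaemonSet on host ports and avoid the LoadBalancer service (option names as I recall them from that chart; double-check against its values.yaml):

controller:
  kind: DaemonSet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP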
One option is to completely disable this feature on your GKE cluster. When creating the cluster (on console.cloud.google.com) under Add-ons disable HTTP load balancing. If you are using gcloud you can use gcloud beta container clusters create ... --disable-addons=HttpLoadBalancing.
Alternatively, you can also inhibit the GCP Load Balancer by adding an annotation to your Ingress resources, kubernetes.io/ingress.class=somerandomstring.
For newly created ingresses, you can put this in the yaml document:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: somerandomstring
...
If you want to do that for all of your Ingresses you can use this example snippet (be careful!):
kubectl get ingress --all-namespaces \
-o jsonpath='{range .items[*]}{"kubectl annotate ingress -n "}{.metadata.namespace}{" "}{.metadata.name}{" kubernetes.io/ingress.class=somerandomstring\n"}{end}' \
| sh -x
That said, Ingresses are pretty useful with Kubernetes, so I suggest you check out the nginx ingress controller and, after deploying it, annotate your Ingresses accordingly.
If you specify the Ingress class as an annotation on the Ingress object
kubernetes.io/ingress.class: traefik
Traefik will pick it up while the Google Load Balancer will ignore it. There is also a bit of Traefik documentation on this part.
You could deploy the nginx ingress controller in NodePort mode (e.g. if using the Helm chart, set controller.service.type to NodePort) and then load-balance among your instances using DNS. Just make sure you have static IPs for the nodes, or you could even create a DaemonSet that somehow updates your DNS with each node's IP.
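With the old stable chart that would be something like this (Helm 2 flag syntax, which was current for that chart):

helm install stable/nginx-ingress --name nginx-ingress \
  --set controller.service.type=NodePort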
Traefik seems to support a similar configuration (e.g. through serviceType in its helm chart).