My institution's firewall blocks most external ports. Currently, I have an internal Linux virtual machine, for example http://abc.xyz:5555 (this link can only be accessed from the internal network), and the admin has set up a Netscaler so that the internal link is forwarded to a publicly available one: https://def.edu.
Now I have multiple web servers using ports like 5556, 5557, and 5558. I want to set up a Kubernetes Ingress so that all traffic goes into the ingress controller first, and the ingress forwards it to my multiple web services, roughly as the image below shows.
I only have port 5555 available, but all the Ingress tutorials seem to support only HTTP port 80 and HTTPS port 443. My question is: can I set up the Ingress controller host as http://abc.xyz:5555? Or should I go for another approach, as the documentation says: "An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer." If so, what terms/techniques should I use?
I suggest using an ingress, since each load balancer gets its own external IP assigned. You can specify custom ports and protocols (TCP, UDP, HTTP). I worked with nginx, but its documentation seemed outdated (last checked a week ago), so we are currently using Traefik. Its web dashboard was also a big help when debugging.
How we solved it:
Install Traefik via Helm with custom values so that it listens on other ports besides 80 and 443: add custom entrypoints in your values.yaml and install Traefik with:
helm install --values values.yaml stable/traefik
Install your ingress HTTP/TCP/UDP routes
Port-forward the web dashboard and go to http://localhost:9000/dashboard
Please see the official docs for more detailed steps: https://docs.traefik.io/getting-started/install-traefik/#use-the-helm-chart
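As an illustration, such a values.yaml could declare an extra entrypoint alongside the defaults. This is only a sketch: the entrypoint name and port numbers below are made up, and the exact values schema depends on your Traefik chart version, so check the chart's documentation.

```yaml
# Hypothetical values.yaml fragment for the Traefik Helm chart:
# declares a custom TCP entrypoint in addition to the default web/websecure ones.
ports:
  web:
    port: 8000
    exposedPort: 80
  websecure:
    port: 8443
    exposedPort: 443
  customtcp:            # illustrative entrypoint name
    port: 9500          # container port Traefik listens on
    exposedPort: 5555   # port exposed on the Service
    expose: true        # newer chart versions nest this under expose.default
    protocol: TCP
```

Ingress routes (e.g. Traefik's IngressRoute/IngressRouteTCP resources) can then reference the custom entrypoint by name.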
Related
I have set up a Kubernetes cluster using kubeadm on a server, with an ingress controller (nginx), and this is working as intended. However, back when I was using Docker, I used to deploy an nginx reverse proxy to forward traffic to the containers. I have read that the ingress controller embeds a reverse proxy, but I am not sure whether it is sufficient and how to configure it (e.g., banning an IP when too many requests are sent within 1 s, ...).
I am aware that this could be done by changing the cluster's port and forwarding traffic from an external reverse proxy to the ingress controller, but I don't know whether that has any benefit.
If you have more control over your inbound traffic, you can test multiple ingress controllers, not only nginx. It will depend on your requirements, although nginx does support rate limiting. I suggest testing other ingress controllers, but install MetalLB first, so you can assign a specific load-balancer IP to each ingress.
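For reference, the rate limiting mentioned above can be sketched with nginx ingress controller annotations (the host and service names here are placeholders):

```yaml
# Illustrative Ingress with per-client-IP rate limiting (ingress-nginx annotations)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                 # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"         # requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "5"  # concurrent connections per client IP
spec:
  ingressClassName: nginx
  rules:
    - host: example.com        # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

Clients exceeding the limits receive a 503 by default; see the ingress-nginx annotation docs for the full set of limit-* knobs.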
I am deploying Traefik on my EKS cluster via the default Traefik Helm chart and I am also using the AWS Load Balancer Controller.
Traefik deploys fine and routes traffic to my services. However, one of the customer's services requires the x-forwarded-proto header to be passed through, so it knows whether the user originally came in via HTTP or HTTPS.
The AWS ALB sends the header in, but Traefik doesn't forward it on. Does anybody know how to make Traefik do this?
How I install Traefik:
helm install traefik traefik/traefik --values=values.yaml
With Traefik, you have to trust external proxies' addresses in order to preserve their X-Forwarded-* headers (including X-Forwarded-For and X-Forwarded-Proto).
This would be done adding an argument such as --entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,W.X.Y.Z/32
Using Helm, you should be able to use:
helm install .... --set "additionalArguments={--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32\,10.42.0.0/16}"
... or write your own values file.
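A values-file equivalent might look like the following; this is a minimal sketch assuming the traefik/traefik chart's additionalArguments list, and the trusted IP ranges are examples that you must replace with your own:

```yaml
# values.yaml: pass extra CLI arguments to Traefik
# (127.0.0.1/32 and 10.42.0.0/16 are placeholder ranges)
additionalArguments:
  - "--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,10.42.0.0/16"
```

Then install with `helm install traefik traefik/traefik --values=values.yaml` as in the question.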
WARNING: by default the chart does not configure hostNetwork; instead it exposes your ingress using a LoadBalancer service (which is actually based on a NodePort).
The NodePort behavior is to NAT connections entering the SDN. As a result, Traefik would see some internal SDN address; depending on which SDN you are using, it could be the first usable address of a host subnet, the network address of that host subnet, your Kubernetes node's IP outside the SDN, etc. You would have to figure out which IPs to trust, depending on your setup.
So, instead of explaining the architecture, I drew you a picture today :) I know, it's a 1/10.
I forgot to draw one thing: it is a single-node cluster.
Hope this will save you some time.
It's probably also easier to see where my struggles are, since the picture exposes my gaps in understanding.
So, in a nutshell:
What is working:
I can curl each ingress via virtual hosts from inside the server using curl -vH 'Host: host.com' http://192.168.1.240/articleservice/system/ipaddr
I can access the server
What's not working:
I cannot access the cluster from outside.
Somehow I am not able to solve this myself, even though I have read quite a lot and had lots of help. As I have been struggling with this for a while now, explicit answers are really appreciated.
Generally, you cannot access your cluster from outside without exposing a service.
You should change your ingress controller's service type to NodePort and let Kubernetes assign a port to that service.
You can see the ports assigned to a service using kubectl get service ServiceName.
Now it's possible to access that service from outside at http://ServerIP:NodePort, but if you need to use the standard HTTP and HTTPS ports, you should run a reverse proxy outside of your cluster to forward traffic from port 80 to the NodePort assigned to the ingress controller service.
If you don't want to add a reverse proxy, it is possible to add externalIPs to the ingress controller service, but that way you lose the RemoteAddr in your endpoints and get the ingress controller pod's IP instead.
externalIPs can be a list of your public IPs.
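A sketch of such a NodePort service for the ingress controller; the name, namespace, selector labels, and nodePort values below are illustrative and must match your actual controller deployment:

```yaml
# Illustrative NodePort service exposing an nginx ingress controller
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller       # placeholder name
  namespace: ingress-nginx             # placeholder namespace
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match your controller pods
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080    # reachable at http://ServerIP:30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```

If nodePort is omitted, Kubernetes assigns one from the 30000-32767 range automatically, which you can then read back with kubectl get service.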
You can find useful information about services and ingress at the following links:
Kubernetes Services
Nginx Ingress - Bare-metal considerations
I know Kubernetes' NodePort and ClusterIP service types well.
But I am very confused about the Ingress approach: how does a request reach a pod in Kubernetes via an Ingress?
Suppose the k8s master IP is 1.2.3.4; after the Ingress is set up, it can connect to a backend service (e.g., myservice) on a port (e.g., 9000).
Now, how can I reach myservice:9000 from outside, i.e., through 1.2.3.4? There's no entry port on the 1.2.3.4 machine.
And many docs say to visit the service via 'foo.com' as configured in the Ingress YAML file. But that seems odd, because xxx.com needs DNS; there's no magic that lets you invent any xxx.com you like and have it map to your machine!
The key part of the picture is the Ingress Controller. It's an instance of a proxy (could be nginx or haproxy or another ingress type) and runs inside the cluster. It acts as an entrypoint and lets you add more sophisticated routing rules. It reads Ingress Resources that are deployed with apps and which define the routing rules. This allows each app to say what the Ingress Controller needs to do for routing to it.
Because the controller runs inside the cluster, it needs to be exposed to the outside world. You can do this via NodePort, but if you're using a cloud provider then it's more common to use a LoadBalancer. This gives you an external IP and port that reach the Ingress controller, and you can point DNS entries at that. If you do point DNS at it, then you have the option of routing rules based on DNS names (such as different subdomains for different apps).
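For example, an Ingress resource with a host-based rule might look like this (the host, service name, port, and ingress class are placeholders; the controller matches incoming requests by their Host header, which is why DNS for that name must point at the controller's external IP):

```yaml
# Illustrative Ingress resource: a host-based routing rule read by the controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  ingressClassName: nginx      # whichever controller you run
  rules:
    - host: foo.example.com    # DNS for this name must resolve to the controller
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice
                port:
                  number: 9000
```

Without DNS, you can still test it by sending the Host header manually, e.g. curl -H 'Host: foo.example.com' http://EXTERNAL_IP/.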
The article 'Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?' has some good explanations and diagrams - here's the diagram for Ingress:
I have an HTTP LoadBalancer on Google Kubernetes Engine that is configured with nginx-ingress to serve website traffic. I would now also like to expose a database (PostgreSQL) on port 5432. How do I do that without the cost of a separate LoadBalancer? nginx-ingress seems to only support HTTP traffic.
EDIT:
Actually, never mind; see https://github.com/nginxinc/kubernetes-ingress/blob/c525f568e5b2c5fb234706c67c9a453d4248ee9f/examples/customization/nginx-config.yaml#L35 for how to add a main snippet via the NGINX ingress ConfigMap. Look up how to use NGINX as a TCP proxy and put that snippet there.
Configuration snippets available via annotations don't allow you to add a whole server block, meaning the standard ingress controller deployment can't do what you are asking. You'll have to put together a custom deployment yourself to add another server snippet.
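As an aside, the community kubernetes/ingress-nginx controller (a different project from the NGINX Inc. one linked above) handles plain TCP exposure through a dedicated ConfigMap instead; a sketch, assuming the controller is started with --tcp-services-configmap pointing at it (the names and namespace are illustrative):

```yaml
# Illustrative tcp-services ConfigMap for kubernetes/ingress-nginx:
# maps an external port to namespace/service:port
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "5432": "default/postgres:5432"   # expose PostgreSQL through the controller
```

The controller's own Service then also needs a matching port 5432 entry so the cloud load balancer forwards that traffic.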