I have a service that serves several /locations. I would like to make a single location /how/very/special reachable from any IP while keeping every other /location accessible only to a list of trusted IPs (which is trivial to do when you can edit the nginx configuration directly).
What is the best-practice way to achieve this via Traefik or an ingress controller? Is a sidecar nginx the only way to add this logic?
You can achieve that with the NGINX ingress controller in Kubernetes, as described in the documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#user-content-whitelist-source-range
You can specify allowed client IP source ranges through the
nginx.ingress.kubernetes.io/whitelist-source-range annotation. The
value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.
To configure this setting globally for all Ingress rules, the
whitelist-source-range value may be set in the NGINX ConfigMap.
Note: Adding an annotation to an Ingress rule overrides any global restriction.
So by default you should put your trusted IP CIDRs in the NGINX ConfigMap, and override that rule only for /how/very/special by setting its CIDR to 0.0.0.0/0.
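A minimal sketch of both pieces, assuming the community ingress-nginx controller (the ConfigMap name/namespace depend on your installation, and myservice is a placeholder; note the annotation applies per Ingress resource, so the open path gets its own Ingress):

# Global default: only trusted CIDRs may reach any Ingress-routed location.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your install
  namespace: ingress-nginx
data:
  whitelist-source-range: "10.0.0.0/24,172.10.0.1"
---
# Dedicated Ingress for the one open path; its annotation overrides the global list.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: special-open
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "0.0.0.0/0"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /how/very/special
            pathType: Prefix
            backend:
              service:
                name: myservice   # placeholder backend
                port:
                  number: 80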
I have installed nginx ingress in Kubernetes from the official documentation, but while configuring the rules without mentioning the "host" field I am getting the below error.
error:
spec.rules[0].host: Required value
Is it possible to configure it without a host? I want to access the service using only an IP address.
I also found the below deployment file, with which I am able to apply rules without "host", but I am not sure whether it is safe to use. Please guide me here.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
Do you mean configuring the Ingress resource? The ingress controller is different from the Ingress itself. If you are configuring an Ingress, then host is completely optional: if host is omitted, the rule applies to all inbound HTTP traffic, so the service is reachable through the IP address by default. Refer to this documentation for more info: https://kubernetes.io/docs/concepts/services-networking/ingress/
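For example, a rule like this validates fine without a host and matches traffic for any hostname or raw IP access (my-service is a placeholder):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: no-host-ingress
spec:
  rules:
    - http:   # no "host" field: applies to all inbound HTTP traffic
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80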
I use pathType: ImplementationSpecific for many routes in an ingress.
The final nginx-ingress-controller configs for two clusters:
location ~* /some/route/(?!one|two|three).{1,} # one cluster
location /some/route/(?!one|two|three).{1,} # other cluster
The second one is wrong because it is a regex route but ~* is missing.
The nginx-ingress-controller versions are matching in both environments.
The use-regex annotation is NOT used in any of the environments.
From the docs I read that ImplementationSpecific matching depends on the ingress class, and I am not sure what that means.
I didn't find any configuration that could explain this behaviour or the difference between the configs.
Why is nginx-ingress-controller config different in different clusters?
The generated nginx-ingress-controller config depends on how the controller in each cluster is deployed and scoped.
When running the NGINX Ingress Controller, you have the following options with regard to which configuration resources it handles:
Cluster-wide Ingress Controller (default). The Ingress Controller handles configuration resources created in any namespace of the cluster. As NGINX is a high-performance load balancer capable of serving many applications at the same time, this option is used by default in our installation manifests and Helm chart.
Single-namespace Ingress Controller. You can configure the Ingress Controller to handle configuration resources only from a particular namespace, which is controlled through the -watch-namespace command-line argument. This can be useful if you want to use different NGINX Ingress Controllers for different applications, both in terms of isolation and/or operation.
Ingress Controller for Specific Ingress Class. This option works in conjunction with either of the options above. You can further customize which configuration resources are handled by the Ingress Controller by configuring the class of the Ingress Controller and using that class in your configuration resources. See the section Configuring Ingress Class.
For more information, refer to the NGINX Ingress Controller documentation.
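As a sketch, this scoping is set through command-line arguments on the controller Deployment (flag names as in the NGINX Inc. controller docs quoted above; the image tag and namespace value are illustrative):

containers:
  - name: nginx-ingress
    image: nginx/nginx-ingress:latest
    args:
      - -watch-namespace=my-team-namespace   # single-namespace mode
      - -ingress-class=team-a                # only handle resources of this class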
Some use cases for this might be:
An Ingress Controller that is behind an internal ELB for traffic between services within the VPC (or a group of peered VPCs)
An Ingress Controller behind an ELB that already terminates SSL
An Ingress Controller with different functionality or performance
Most NGINX configuration options have NGINX-wide defaults; they can also be overridden at the per-Ingress-resource level.
For more information, refer to the NGINX documentation.
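Tying this back to the question: since ImplementationSpecific path handling is left to the controller/class, one way to avoid drift between clusters is to make the intent explicit in the Ingress itself. A sketch for the community kubernetes/ingress-nginx controller (names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: some-route
  annotations:
    # Make the regex interpretation explicit instead of relying on class defaults;
    # with this set, the generated location should get the ~* modifier.
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx   # pin which controller handles this resource
  rules:
    - http:
        paths:
          - path: /some/route/(?!one|two|three).+
            pathType: ImplementationSpecific
            backend:
              service:
                name: my-service
                port:
                  number: 80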
I am deploying Traefik on my EKS cluster via the default Traefik Helm chart and I am also using the AWS Load Balancer Controller.
Traefik deploys fine and routes traffic to my services. However, one of the customer's services requires the x-forwarded-proto header to be passed to it, so that it knows whether the user originally came in via http or https.
The AWS ALB is sending the header, but Traefik doesn't forward it on. Does anybody know how to make Traefik do this?
How I install Traefik:
helm install traefik traefik/traefik --values=values.yaml
With Traefik, you have to trust external proxies' addresses in order to preserve their X-Forwarded-* headers (including X-Forwarded-Proto).
This would be done by adding an argument such as --entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,W.X.Y.Z/32
Using Helm, you should be able to pass it with --set (commas inside an array entry have to be escaped):
helm install traefik traefik/traefik --set "additionalArguments={--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32\,10.42.0.0/16}"
... or write your own values file.
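A values-file sketch, assuming the traefik/traefik chart's additionalArguments list (the CIDRs are placeholders for your ALB/SDN ranges):

# values.yaml
additionalArguments:
  - "--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,10.42.0.0/16"
  - "--entryPoints.web.forwardedHeaders.trustedIPs=127.0.0.1/32,10.42.0.0/16"

Then install as above with helm install traefik traefik/traefik --values=values.yaml.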
WARNING: by default the chart does not configure hostNetwork, and instead exposes your ingress using a LoadBalancer service (actually based on a NodePort).
The NodePort behavior is to NAT connections entering the SDN. As such, Traefik would see some internal SDN address -- depending on which SDN you are using, it could be the first usable address of a host subnet, the network address of that host subnet, the out-of-SDN IP of your Kubernetes node, ... You would have to figure out which IP to trust, depending on your setup.
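As a hedged sketch of ways around that with the traefik/traefik chart (value names should be verified against your chart version):

# values.yaml (illustrative)
# Option 1: bind Traefik to the host network so it sees real client addresses.
hostNetwork: true
# Option 2: keep the LoadBalancer service but stop the SNAT, so the original
# source IP is preserved (traffic is then only routed to nodes running a pod).
service:
  spec:
    externalTrafficPolicy: Local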
I have installed the nginx ingress controller on GKE from
https://github.com/kubernetes/ingress-nginx via its Helm chart.
It does create the controller and also a firewall rule, but the rule is open to all public IPs.
Is there a way to restrict this via the Helm chart?
If not, is there any way to import the auto-generated firewall rule into Terraform and adjust it?
the name of the firewall rule looks like this: k8s-fw-a8301409696934895b9facd9232892dc
Thanks
nginx ingress creates a LoadBalancer service to expose itself on GKE. You can define the spec.loadBalancerSourceRanges field in the service definition with the IPs you would like to allow access; all other IPs will be filtered out. The default value for this field is 0.0.0.0/0, and the GCE firewall rules are created based on this field.
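A sketch of the relevant part of the Service definition (the CIDR is a placeholder for your trusted range):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # only this range may reach the load balancer

With the Helm chart, the same list can likely be set via controller.service.loadBalancerSourceRanges in your values file (check your chart version).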
Note that you can also leverage the NGINX ingress controller itself to limit which IPs can connect; however, that still allows all traffic to reach the node.
I know Kubernetes' NodePort and ClusterIP service types well, but I am very confused about Ingress: how does a request from outside actually reach a pod when using Ingress?
Suppose the K8s master IP is 1.2.3.4, and after the Ingress setup it can route to a backend service (e.g. myservice) on a port (e.g. 9000).
Now, how can I visit this myservice:9000 from outside, i.e. through 1.2.3.4? There's no entry port open on the 1.2.3.4 machine.
And many docs say to visit it via 'foo.com' as configured in the Ingress YAML file. But that seems funny: foo.com definitely needs DNS; there's no magic that lets you invent any xxx.com you like, have it be a real website, and map it to your machine!
The key part of the picture is the Ingress Controller. It's an instance of a proxy (could be nginx or haproxy or another ingress type) and runs inside the cluster. It acts as an entrypoint and lets you add more sophisticated routing rules. It reads Ingress Resources that are deployed with apps and which define the routing rules. This allows each app to say what the Ingress Controller needs to do for routing to it.
Because the controller runs inside the cluster, it needs to be exposed to the outside world. You can do this with NodePort, but if you're using a cloud provider then it's more common to use a LoadBalancer. This gives you an external IP and port that reach the Ingress controller, and you can point DNS entries at that. If you do point DNS at it, then you have the option to use routing rules based on DNS (such as using different subdomains for different apps).
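Putting it together, a minimal sketch (myservice:9000 from the question; foo.example.com stands for whatever DNS name you point at the controller's external IP):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  rules:
    - host: foo.example.com   # DNS name resolving to the LoadBalancer IP
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myservice
                port:
                  number: 9000

Until you own a DNS name, you can test with curl -H "Host: foo.example.com" http://<external-ip>/ since the Host header is all the controller actually routes on.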
The article 'Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?' has some good explanations and diagrams of this flow.