nginx ingress controller on Google Kubernetes Engine firewall rules

I have installed the nginx ingress controller on GKE from
https://github.com/kubernetes/ingress-nginx via its Helm chart.
It creates the controller and also a firewall rule. The rule is open to all public IPs.
Is there a way to restrict this via the Helm chart?
If not, is there any way to get the auto-generated firewall rule via Terraform and adjust it?
The name of the firewall rule looks like this: k8s-fw-a8301409696934895b9facd9232892dc
Thanks

nginx ingress creates a LoadBalancer service to expose it on GKE. You can define the spec.loadBalancerSourceRanges field in the service definition with the IPs you would like to allow access; all other IPs will be filtered. The default value for this field is 0.0.0.0/0, and the GCE firewall rules are created based on this field.
Note that you can also leverage the nginx ingress controller itself to limit which IPs can connect; however, this still allows all traffic to reach the node.
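For example, the ingress-nginx Helm chart exposes this field as controller.service.loadBalancerSourceRanges. A minimal values-file sketch (the ranges below are placeholders; substitute your own CIDRs):

# values.yaml for the ingress-nginx Helm chart
controller:
  service:
    loadBalancerSourceRanges:
      - 203.0.113.0/24   # placeholder: office network
      - 198.51.100.7/32  # placeholder: VPN gateway

Applied with helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -f values.yaml (assuming the usual repo alias), the auto-created GCE firewall rule should then be limited to those ranges.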

Related

How do I make Traefik pass on the x-forwarded-proto header?

I am deploying Traefik on my EKS cluster via the default Traefik Helm chart and I am also using the AWS Load Balancer Controller.
Traefik deploys fine and routes traffic to my services. However, one of the customer's services requires the x-forwarded-proto header to be passed to it, so that it knows whether the user originally came in via HTTP or HTTPS.
The AWS ALB is sending the header, but Traefik doesn't forward it on. Does anybody know how to make Traefik do this?
How I install Traefik:
helm install traefik traefik/traefik --values=values.yaml
With Traefik, you have to trust external proxies' addresses in order to preserve their X-Forwarded-* headers (including X-Forwarded-Proto).
This is done by adding an argument such as --entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,W.X.Y.Z/32
Using Helm, you should be able to use:
helm install ... --set "additionalArguments={--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32\,10.42.0.0/16}" (note the escaped comma inside the list value)
... or write your own values file.
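The equivalent values-file sketch (the 10.42.0.0/16 range is only an example; trust whatever ranges match your own network):

# values.yaml for the Traefik Helm chart
additionalArguments:
  - "--entryPoints.websecure.forwardedHeaders.trustedIPs=127.0.0.1/32,10.42.0.0/16"

This avoids the comma escaping required with --set and works with the install command shown above.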
WARNING: by default the chart does not configure hostNetwork, and instead exposes your ingress using a LoadBalancer service (actually based on a NodePort).
The NodePort behavior is to NAT connections entering the SDN. As such, Traefik would see some internal SDN address -- depending on which SDN you are using, it could be the first usable address of a host subnet, the network address of that host subnet, the IP of your Kubernetes node outside the SDN, ... You would have to figure out which IP to trust, depending on your setup.

GKE: ingress with sub-domain

I used to work with an OpenShift/OKD cluster deployed in AWS, where it was possible to connect the cluster to a domain name from Route53. Then, as soon as I deployed an ingress with host mappings (where the hosts defined in the ingress were subdomains of the base domain), all the necessary LB rules (Routes in OpenShift) and the subdomain itself were created by OpenShift and were directly available. For example: OpenShift is connected to the domain "somedomain.com", which is registered in Route53. In the ingress I have a host mapping like:
hosts:
  - host: sub1.somedomain.com
    paths:
      - path
After deployment I can reach sub1.somedomain.com. Is this kind of functionality available in GKE?
So far I have seen only mapping to a static IP.
Also, I read here https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2 that if I need to connect a service to an ingress, the service has to be of type NodePort. Is it really so? In OpenShift this was not required; any normal ClusterIP service could be connected to an ingress.
Thanks in advance!
I think you should consider other Ingress Controllers for your use case.
I'm not a GKE expert, but from what I can see in Best practices for enterprise multi-tenancy (quoted below),
you would additionally need to consider how to route multiple Ingress hostnames through a wildcard subdomain, the way OpenShift does.
Set up HTTP(S) Load Balancing with Ingress:
You can create and configure an HTTP(S) load balancer by creating a Kubernetes Ingress resource,
which defines how traffic reaches your Services and how the traffic is routed to your tenant's application.
By registering Services with the Ingress resource, the Services' naming convention becomes consistent,
showing a single ingress, such as tenanta.example.com and tenantb.example.com.
The routing feature basically depends on the Ingress Controller.
From what I found, the default Ingress Controller of GKE just creates a Google Cloud HTTP(S) Load Balancer, but it does not consider multi-tenancy by default the way OpenShift does.
In contrast, in OpenShift the Ingress Controller is implemented using HAProxy with a dynamic configuration feature, as follows:
LB --tenanta.example.com--> HAProxy (directly forwards the tenanta.example.com traffic to the target pod IPs) ---> Target Pods
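To make the quoted setup concrete, here is a minimal Ingress sketch with one rule per tenant hostname (the Service names and port are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenants
spec:
  rules:
    - host: tenanta.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant-a   # hypothetical Service
                port:
                  number: 80
    - host: tenantb.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tenant-b   # hypothetical Service
                port:
                  number: 80

How traffic actually reaches those hosts still depends on which controller implements the Ingress.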
The type of service exposure depends on the Kubernetes implementation of each cloud provider.
If the ingress controller is a component inside your cluster, a ClusterIP is enough to have your service reachable (internally, from inside the cluster itself).
If the ingress definition configures an external element (in the case of GKE, a load balancer), that element isn't part of the cluster and can't reach a ClusterIP (because it is only accessible internally). A NodePort is required in this case.
So, in your case, either you expose your service as a NodePort, or you configure GKE with another ingress controller, installed locally in the cluster, instead of the default one.
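As a sketch, exposing a service as NodePort for the default GKE ingress controller would look like this (names and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort   # required by the default GKE ingress controller
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080

The Ingress resource then references my-app as its backend service.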
So far, GKE does not provide the possibility to dynamically create subdomains. The desired situation would be if a GKE cluster could be pointed at a DNS zone managed in GCP and could mimic OpenShift Routes via, for example, ingress annotations.
But the reality right now is that you have to create the subdomain or domain yourself, as well as the IP address which you connect this domain to. That reserved GCP IP address can then be attached to an ingress by name using an annotation, or it can be used in a LoadBalancer service.
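A sketch of the annotation approach, assuming a global static IP reserved in advance (for example with gcloud compute addresses create web-ip --global; the name web-ip and the backend Service are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web-ip   # reserved IP, referenced by name
spec:
  defaultBackend:
    service:
      name: web   # hypothetical Service of type NodePort
      port:
        number: 80

You then point your DNS record (e.g. sub1.somedomain.com) at that reserved IP yourself.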

AWS EKS - Load Balancer (Ingress)

I am deploying kubernetes nginx-ingress on AWS. Is there any way to prevent the automatic creation of a Network Load Balancer and instead assign an already existing load balancer in the config?
If not, is there any way to give the AWS NLB a custom name from within the nginx ingress configuration?
No, what you're asking for is not supported. There's no way to configure the nginx ingress controller to create NLBs with specific names or to use existing NLBs.
You can, however, do this manually if you set the nginx ingress controller's service type to NodePort and then manually register the targets with the NLB (via the console, CLI, etc.).
Note that it's not ideal to have things configured outside of Kubernetes in this way, because there is a tendency to forget that ingress changes aren't synced to the NLB.
Make sure to check the security groups of your nodes to allow traffic from the load balancer to the exposed ports on your nodes.
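A sketch of the manual approach, assuming the ingress-nginx chart's controller.service values (the node ports are arbitrary examples):

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443

You would then register your nodes on ports 30080 and 30443 as targets of the pre-existing NLB yourself.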

How would I set up Kubernetes ingress for VPN-only access?

I've got a Kubernetes cluster with nginx ingress set up for public endpoints. That works great, but I have one service that I don't want to expose to the public, while I do want to expose it to people who have VPC access via VPN. The people who will need to access this route will not have kubectl set up, so they can't use port-forward to send it to localhost.
What's the best way to set up ingress for a service that will be restricted to people on the VPN?
Edit: thanks for the responses. As a few people guessed, I'm running an EKS cluster in AWS.
It depends a lot on your ingress controller and cloud host, but roughly speaking you would probably set up a second copy of your controller using an internal load balancer service rather than a public LB, and then set that service and/or ingress to only allow traffic from the IP of the VPN pods.
Since you are talking about a "VPC", and assuming you have your cluster in AWS, you probably need to do what coderanger said:
deploy a new ingress controller with "LoadBalancer" as the service type and add the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "true".
Check here for the possible annotations that you can add to a load balancer in AWS: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#load-balancers
You can also create a security group, for example, and add it to the load balancer with service.beta.kubernetes.io/aws-load-balancer-security-groups.
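A minimal sketch of such an internal controller Service (labels and names are hypothetical; adjust them to match your nginx ingress deployment):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal
  annotations:
    # provisions an internal (VPC-only) load balancer instead of a public one
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # hypothetical controller labels
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https

Only clients inside the VPC (including VPN users routed into it) can reach the resulting load balancer.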

Customizing vhost config with nginx/traefik ingress controllers

I have a service that serves several /locations. I would like to make a single location, /how/very/special, reachable from any IP while keeping every other /location accessible only to a list of trusted IPs (which is trivial to do when you can edit the nginx vhost config directly).
What is the best-practice way to achieve this via Traefik or an ingress controller? Is a sidecar nginx the only way to add this logic?
You can achieve that by using the nginx ingress controller in Kubernetes, as described in the documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#user-content-whitelist-source-range
You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1. To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.
Note: adding an annotation to an Ingress rule overrides any global restriction.
So, by default you should put your trusted IP CIDRs in the NGINX ConfigMap, and override that rule only for /how/very/special by setting its CIDR to 0.0.0.0/0.
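A sketch of both pieces, with hypothetical names and an example trusted range (note the annotation applies to a whole Ingress object, so the public path gets its own Ingress):

# global restriction via the controller's ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # must match the ConfigMap your controller reads
  namespace: ingress-nginx
data:
  whitelist-source-range: "203.0.113.0/24"   # placeholder trusted CIDR
---
# per-location override: a separate Ingress for the public path only
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: special-public
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "0.0.0.0/0"
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /how/very/special
            pathType: Prefix
            backend:
              service:
                name: my-service   # hypothetical
                port:
                  number: 80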