Traefik 1.7.11 subdomain-based access rules setup - Kubernetes

I want to create IP-based, subdomain-level access rules for the Traefik (1.7.11) ingress controller running on Kubernetes (EKS). Currently, all IPs are allowed to talk to the external/frontend entry points:
traefik.toml: |
  defaultEntryPoints = ["http","https"]
  logLevel = "INFO"
  [entryPoints]
    [entryPoints.http]
    address = ":80"
    compress = true
      [entryPoints.http.redirect]
      entryPoint = "https"
      [entryPoints.http.whiteList]
      sourceRange = ["0.0.0.0/0"]
    [entryPoints.https]
    address = ":443"
    compress = true
      [entryPoints.https.tls]
      [entryPoints.https.whiteList]
      sourceRange = ["0.0.0.0/0"]
We only have prod environments running in this cluster.
I want to limit certain endpoints, such as monitoring.domain.com, so they are accessible only from a limited set of IPs (our office location), while keeping *.domain.com (the default) accessible from the public internet.
Is there any way I can do this in Traefik?

You can try using the traefik.ingress.kubernetes.io/whitelist-source-range: "x.x.x.x/x, xxxx::/x" Traefik annotation on your Ingress object. You can also have 4 Ingress objects, one each for stage.domain.com, qa.domain.com, dev.domain.com and prod.domain.com.
For anything other than prod.domain.com you can add a whitelist.
Another option is to change your traefik.toml with [entryPoints.http.whiteList], but then you may need to run separate ingress controllers with a different ingress class for each environment.
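Going back to the annotation approach for the original question, a minimal sketch could look like the Ingress below; the Ingress name, the monitoring service name, and the office CIDR 203.0.113.0/24 are placeholders for illustration, not values from your setup:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    # Only requests from these source ranges reach this Ingress (placeholder office CIDR)
    traefik.ingress.kubernetes.io/whitelist-source-range: "203.0.113.0/24"
spec:
  rules:
  - host: monitoring.domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: monitoring   # placeholder service name
          servicePort: 80
Ingresses for the other *.domain.com hosts simply omit the annotation and remain reachable from the public internet.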

Related

How to assign a static IP address to a Kubernetes Ingress using Terraform?

I've been using a Kubernetes Ingress config file to assign a static external IP address created in GCP.
The Ingress and the deployment are managed by GKE.
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: test-address
spec:
  backend:
    serviceName: test-service
    servicePort: 80
With this YAML file, the static IP address that was created beforehand is successfully attached to the Ingress.
On the External IP Addresses page of the VPC Network menu, the IP shows as in use by a forwarding rule:
Name External Address Region Type Version In use by
test-address 12.34.56.78 asia-northeast2 Static IPv4 Forwarding rule k8s2-ab-blablablabla
However, recently I tried using Terraform to deploy the infrastructure to GCP, and I wrote a Terraform config file equivalent to the ingress.yaml above.
ingress.tf
resource "kubernetes_ingress" "test_ingress" {
metadata {
name = "test-ingress"
annotations = {
"kubernetes.io/ingress.global-static-ip-name" = "test-address"
}
}
spec {
backend {
service_name = test-service
service_port = "80"
}
}
}
After applying this config, the Ingress was created successfully, but the IP address was not attached to the Ingress.
In the Ingress details in GCP, an error appears with the message:
Error syncing to GCP: error running load balancer syncing routine: loadbalancer blablablablabla does not exist: the given static IP name test-address doesn't translate to an existing static IP.
And on the External IP Addresses page of the VPC Network menu, the "In use by" column for the IP address shows None.
What is the problem here? Did I miss something with Terraform?
As @MattBrowne said in the comments, the address needs to be a global IP, not a regional one (in Terraform, an address created with google_compute_global_address rather than the regional google_compute_address). This also fixed it for me.

How to use a request header as a path variable in the entryPoint address for the Traefik ingress controller

I use Traefik 1.7 for service authentication via Keycloak in Kubernetes. (I already have a Bearer token and just need to validate it via Keycloak.)
My ingress controller configuration looks like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    # traefik.toml
    defaultEntryPoints = ["http"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
      [entryPoints.my-api]
      address = ":9999"
        [entryPoints.my-api.auth.forward]
        address = "https://keycloak-host/auth/realms/R-1/protocol/openid-connect/userinfo"
        trustForwardHeader = true
    [kubernetes]
    namespaces = ["n1", "n2", "n3", "n4"]
    [respondingTimeouts]
    readTimeout = "120s"
    writeTimeout = "5s"
    idleTimeout = "360s"
The problem is that I have different realms for different organisations in Keycloak. The request carries an Org-Id header, and I need to place its value in the address instead of R-1:
address = "https://keycloak-host/auth/realms/R-${Org-Id}/protocol/openid-connect/userinfo"
Is there a way to extract the header from the request and place it in the address path?

Traefik HTTP to HTTPS redirect behind AWS ELB (TCP)

I have a Kubernetes setup where Traefik is my ingress controller. Traefik sits behind an AWS ELB which listens on an SSL port (TCP:443) so that it can terminate SSL using an ACM certificate. The ELB then load balances to Traefik (in k8s), which listens on TCP:80. We require this setup because we whitelist on a per-Ingress basis in Traefik and use the proxy protocol header to do so (we tried X-Forwarded-For whitelisting on an HTTP load balancer, but that was easy to bypass).
This works for incoming HTTPS traffic, but I would now like to set up HTTP-to-HTTPS redirection. So far I have set up a TCP:80 listener on the load balancer forwarding to TCP:81, and I've set up my Traefik entrypoints using a configuration file:
defaultEntryPoints = ["http"]
debug = false
logLevel = "INFO"
# Do not verify backend certificates (use https backends)
InsecureSkipVerify = true
[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
[entryPoints.http.proxyProtocol]
insecure = true
trustedIPs = ["10.0.0.0/8"]
[entryPoints.redirect]
address = ":81"
compress = true
[entryPoints.http.redirect]
entryPoint = "http"
However, this gives a 400 Bad Request when I try to access any service on :80.
I assume this is because, for this method to work, Traefik itself needs to have the SSL listener rather than the ELB.
Is there a way to set this up so that all traffic that hits Traefik on :81 is redirected to HTTPS?

Multiple acme sections for multiple customers with a single Traefik ingress controller in Kubernetes

Situation:
I want many customers to share a common set of public IPs to access the Kubernetes cluster.
Hostname-based routing within the cluster is already done, but I also want to provide HTTPS for all my customers' domains.
I have a set of edge-router nodes with one public IP each. There is a Traefik ingress controller configured as a DaemonSet listening on these nodes.
Let's suppose there can be thousands of customers with thousands of domains.
My problem is that I want to have multiple acme sections.
Extracted from a ConfigMap in my ingress controller manifest:
[acme]
email = "ca@mycompany.com"
storage = "/etc/traefik/acme.json"
entryPoint = "https"
onHostRule = true
caServer = "https://acme-v02.api.letsencrypt.org/directory"
[[acme.domains]]
  main = "mycustomer1.com"
[acme.httpChallenge]
  entryPoint = "http"
My ideal solution would be to have a way to split each customer's HTTPS configuration into separate files, each one with its own acme settings.
Or, even better, a way to configure this from the Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: garden
  annotations:
    kubernetes.io/ingress.class: traefik
    #
    # LET'S ENCRYPT CONFIGURATION COULD BE HERE.
    # THAT WAY IT WOULD BE EASY TO CONFIGURE HTTPS FOR EACH CUSTOMER.
    #
spec:
  rules:
  - host: mycustomer1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: backend
          servicePort: 80
Is there any way to achieve this?
I would suggest creating a separate kind: Ingress for each customer and managing them individually. You also have the option of using a dedicated ConfigMap for each ingress class, as sketched below.
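As a rough sketch of that suggestion (the customer hostname, Ingress name, and backend service name are illustrative assumptions, not part of your current setup), each customer gets its own Ingress object; with onHostRule = true in your acme section, Traefik 1.7 requests a certificate for every host rule it discovers, and the ingress.class annotation lets you point a group of customers at a Traefik instance that loads its own ConfigMap:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: customer2-ingress
  namespace: garden
  annotations:
    # Point this group of customers at the Traefik instance (ingress class)
    # whose ConfigMap carries the acme settings you want for them.
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: mycustomer2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: customer2-backend   # placeholder service name
          servicePort: 80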

Whitelist an IP to access a deployment with a Kubernetes Ingress and Istio

I'm trying to whitelist an IP so that only it can access a deployment inside my Kubernetes cluster.
I looked for documentation online about this, but I only found the
ingress.kubernetes.io/whitelist-source-range
annotation for Ingress to grant access to a certain IP range. Still, I couldn't manage to isolate the deployment.
Here is the ingress configuration YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-internal
  annotations:
    kubernetes.io/ingress.class: "istio"
    ingress.kubernetes.io/whitelist-source-range: "xxx.xx.xx.0/24, xx.xxx.xx.0/24"
spec:
  rules:
  - host: white.example.com
    http:
      paths:
      - backend:
          serviceName: white
          servicePort: 80
I can access the deployment from my whitelisted IP, but also from my mobile phone (a different IP that is not whitelisted in the config).
Has anyone run into the same problem using Ingress and Istio?
Any help, hint, docs or alternative configuration will be much appreciated.
Have a look at the annotation overview; it seems that whitelist-source-range is not supported by Istio:
whitelist-source-range: Comma-separated list of IP addresses to enable access to.
Supported by: nginx, haproxy, trafficserver
I managed to solve the IP whitelisting problem for my Istio-based service (an app that uses the istio-proxy sidecar and is exposed through the Istio ingress gateway via a public LB) using NetworkPolicy.
For my case, here is the topology:
Public Load Balancer (in GKE, using preserve-client-IP mode) ==> dedicated Istio Ingress Gateway Controller Pods (see my answer here) ==> my Pods (istio-proxy sidecar container plus my main container).
So, I set up 2 network policies:
1. A NetworkPolicy that guards incoming connections from the internet to my Istio Ingress Gateway Controller Pods. In this policy I just have to set the spec.podSelector.matchLabels field to the pod labels of the dedicated Istio Ingress Gateway Controller Pods.
2. Another NetworkPolicy that limits incoming connections to my Deployment so they are accepted only from the Istio Ingress Gateway Controller pods/deployments.
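A minimal sketch of the two policies, assuming the gateway pods run in the istio-system namespace with the label app: istio-ingressgateway, the app pods carry app: my-app in the default namespace, 203.0.113.0/24 is the range to whitelist, and the istio-system namespace has been given the label name: istio-system; all of these names are assumptions for illustration, not values from the answer above:
# Policy 1: only the whitelisted range may reach the dedicated gateway pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-office-to-ingressgateway
  namespace: istio-system              # namespace of the gateway pods (assumption)
spec:
  podSelector:
    matchLabels:
      app: istio-ingressgateway        # label of the dedicated gateway pods (assumption)
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 203.0.113.0/24           # whitelisted source range (placeholder)
---
# Policy 2: the app pods only accept traffic coming from the gateway pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-ingressgateway-to-app
  namespace: default                   # namespace of the app (assumption)
spec:
  podSelector:
    matchLabels:
      app: my-app                      # label of the app pods (assumption)
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: istio-system           # assumes the namespace carries this label
      podSelector:
        matchLabels:
          app: istio-ingressgateway
Note that the ipBlock rule only acts as a real whitelist if the load balancer preserves the client IP (as mentioned above) and if the cluster's network plugin enforces NetworkPolicy.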