I have multiple services with working ingress routes using Traefik 2.6.
All ingress routes work as expected using annotations, and I get no errors when applying the configuration with the file provider arguments pointing to "dynamic.yml". After checking in the pod itself, Traefik is running with the correct arguments, and the dynamic config file and cert.pem are mounted correctly.
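For context, the file provider is wired in through the Traefik container arguments, roughly like the following sketch (the exact flags and mount path are assumptions based on my setup):
# excerpt from the Traefik container args (sketch)
- --providers.file.filename=/opt/traefik/dynamic.yml
- --providers.file.watch=true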
#dynamic.yml
tls:
  options:
    default:
      clientAuth:
        caFiles:
          - /opt/traefik/cert.pem
        clientAuthType: RequireAndVerifyClientCert
The configuration above applies the TLS options to all ingress routes. However, when I try to scope the options to a single service with ingress annotations, clients no longer get prompted for certs. Changing the configuration to the following:
#dynamic.yml
tls:
  options:
    mtls:
      clientAuth:
        caFiles:
          - /opt/traefik/cert.pem
        clientAuthType: RequireAndVerifyClientCert
...
annotations:
  traefik.ingress.kubernetes.io/router.entrypoints: websecure
  traefik.ingress.kubernetes.io/router.tls: "true"
  traefik.ingress.kubernetes.io/tls.options: mtls
...
The ingress routes function; however, clients are able to view the site without certificate authentication on the specific ingress route that uses the "mtls" TLS option.
Found the answer here:
https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/#annotations
traefik.ingress.kubernetes.io/router.tls.options: foobar@file
The annotation key has to be router.tls.options, and because the option is defined through the file provider it must be referenced with its provider namespace.
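In other words, the corrected annotations for the service look roughly like this (a sketch, assuming the option is named mtls and defined via the file provider):
annotations:
  traefik.ingress.kubernetes.io/router.entrypoints: websecure
  traefik.ingress.kubernetes.io/router.tls: "true"
  traefik.ingress.kubernetes.io/router.tls.options: mtls@file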
Related
Is there a way to force an SSL upgrade for incoming connections on the ingress load-balancer? Or, if that is not possible, can I disable port :80? I haven't found a good documentation page that outlines such an option in the YAML file. Thanks a lot in advance!
https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https)
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet.
This was already correctly answered by a comment on the accepted answer. But since the comment is buried I missed it several times.
As of GKE version 1.18.10-gke.600 you can add a k8s frontend config to redirect from http to https.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
# add below to ingress
# metadata:
#   annotations:
#     networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
The annotation has changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  ...
Here is the annotation change PR:
https://github.com/kubernetes/contrib/pull/1462/files
If you are not bound to the GCLB Ingress Controller, you could have a look at the Nginx Ingress Controller. This controller is different from the built-in one in multiple ways. First and foremost, you need to deploy and manage it yourself. But if you are willing to do so, you get the benefit of not depending on the GCE LB ($20/month) and get support for IPv6 and websockets.
The documentation states:
By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress. If you want to disable that behaviour globally, you can use ssl-redirect: "false" in the NGINX config map.
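For example, disabling that global redirect goes through the controller's ConfigMap, roughly like this sketch (the ConfigMap name and namespace are assumptions and must match what the controller was deployed with):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name, must match the controller's --configmap flag
  namespace: ingress-nginx    # assumed namespace
data:
  ssl-redirect: "false"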
The recently released 0.9.0-beta.3 comes with an additional annotation for explicitly enforcing this redirect:
Force redirect to SSL using the annotation ingress.kubernetes.io/force-ssl-redirect
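Applied to an Ingress, that annotation looks roughly like this (a sketch; newer controller versions use the nginx.ingress.kubernetes.io/ prefix for the same annotation):
metadata:
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
    # on current ingress-nginx versions:
    # nginx.ingress.kubernetes.io/force-ssl-redirect: "true"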
Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
My fingers are crossed that we'll have a straightforward solution to this very common feature in the near future.
UPDATE (April 2020):
HTTP(S) rewrites are now a Generally Available feature. It's still a bit rough around the edges and unfortunately does not work out of the box with the GCE Ingress Controller. But time will tell, and hopefully a native solution will appear.
A quick update: a FrontendConfig can now be created to configure the Ingress. Hope it helps.
Example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT  # i.e. a 301 redirect
You'll need to make sure that your load balancer supports both HTTP and HTTPS.
Worked on this for a long time. In case anyone isn't clear on the post above: you would rebuild your ingress with the annotation kubernetes.io/ingress.allow-http: "false".
Then delete your ingress and redeploy. The annotation will have the ingress create an LB for 443 only, instead of both 443 and 80.
Then you create a Compute Engine HTTP LB yourself, not one managed by GKE.
GUI directions:
Create a load balancer and choose HTTP(S) Load Balancing -- Start configuration.
Choose "From Internet to my VMs" and continue.
Choose a name for the LB.
Leave the backend configuration blank.
Under Host and path rules, select Advanced host and path rules with the action set to Redirect the client to different host/path.
Leave the Host redirect field blank.
Select Prefix redirect and leave the Path value blank.
Choose the redirect response code 308.
Tick the Enable box for HTTPS redirect.
For the Frontend configuration, leave HTTP and port 80; for IP address, select the static IP address being used for your GKE ingress.
Create this LB.
You will now have all http traffic go to this and 308 redirect to your https ingress for GKE. Super simple config setup and works well.
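If you prefer the CLI over the console, roughly the same redirect-only frontend can be created by importing a URL map with a default redirect. This is a sketch based on the Compute Engine URL map API; the name and file are placeholders:
# http-to-https-redirect.yaml (hypothetical file name)
# import with: gcloud compute url-maps import http-to-https-redirect --source=http-to-https-redirect.yaml --global
kind: compute#urlMap
name: http-to-https-redirect
defaultUrlRedirect:
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  httpsRedirect: true
  stripQuery: false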
Note: If you just try to delete the port 80 LB that GKE makes (without the annotation change and rebuilding the ingress) and then add the new redirect compute LB, it does work, but you will start to see error messages on your Ingress along the lines of "error 400: invalid value for field 'resource.ipAddress': "…" is in use and would result in a conflict". GKE is trying to spin up the port 80 LB and can't, because you already have an LB on port 80 using the same IP. It does work, but the error is annoying and GKE keeps trying to build it (I think).
Thanks to the comment of @Andrej Palicka and the page he provided, https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect, I now have an updated and working solution.
First we need to define a FrontendConfig resource and then we need to tell the Ingress resource to use this FrontendConfig.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-prd
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: myapp-frontend-config
spec:
  defaultBackend:
    service:
      name: myapp-app-service
      port:
        number: 80
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: myapp-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You can disable HTTP on your cluster (note that you'll need to recreate your cluster for this change to be applied on the load balancer) and then set HTTP-to-HTTPS redirect by creating an additional load balancer on the same IP address.
I spent a couple of hours on the same question and ended up doing what I've just described. It works perfectly.
Redirecting to HTTPS in Kubernetes is somewhat complicated. In my experience, you'll probably want to use an ingress controller such as Ambassador or ingress-nginx to control routing to your services, as opposed to having your load balancer route directly to your services.
Assuming you're using an ingress controller, then:
If you're terminating TLS at the external load balancer and the LB is running in L7 mode (i.e., HTTP/HTTPS), then your ingress controller needs to use X-Forwarded-Proto and issue a redirect accordingly (see the ConfigMap sketch after this list).
If you're terminating TLS at the external load balancer and the LB is running in TCP/L4 mode, then your ingress controller needs to use the PROXY protocol to do the redirect.
You can also terminate TLS directly in your ingress controller, in which case it has all the necessary information to do the redirect.
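With ingress-nginx, for example, the two external-termination cases above map onto ConfigMap options roughly like this sketch (the ConfigMap name and namespace are assumptions and must match the controller's deployment):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name
  namespace: ingress-nginx    # assumed namespace
data:
  # L7 LB in front: trust the X-Forwarded-* headers set by the load balancer
  use-forwarded-headers: "true"
  # L4/TCP LB in front: speak PROXY protocol with the load balancer instead
  # use-proxy-protocol: "true"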
Here's a tutorial on how to do this in Ambassador.
I am configuring Ingress on Google Kubernetes Engine. I am new to Ingress, but as I understand it, Ingress can be served by different load balancers, and different LBs have to be configured differently.
I have started with a simple ingress config on GKE:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: web-np
              servicePort: 8080
          - path: /v2/keys
            backend:
              serviceName: etcd-np
              servicePort: 2379
And it works fine, so I have two different NodePort services, web-np and etcd-np. But now I need to extend this logic with some rewrite rules, so that a request that points to /service1 is routed to another service, service1-np, but first /service1/hello.html must be rewritten to /hello.html. That's why I have the following questions:
How can I configure rewrite in ingress and if it is possible with default load balancer.
What is default load balancer on GKE.
Where can I find a list of all annotations for it? I thought the full list was at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/ but there is a completely different list there, and there is no kubernetes.io/ingress.global-static-ip-name annotation, which is widely used in Google examples.
Ingress - API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
Kubernetes.io: Ingress
Kubernetes can have multiple Ingress controllers. These controllers are different from each other. The Ingress controllers mentioned by you in this particular question are:
Ingress-GCE - a default Ingress resource for GKE cluster:
Github.com: Kubernetes: Ingress GCE
Ingress-nginx - an alternative Ingress controller which can be deployed to your GKE cluster:
Github.com: Kubernetes: Ingress-nginx
The Ingress configuration you pasted will use the Ingress-GCE controller. If you want to switch to the Ingress-nginx one, you will need to deploy it and set an annotation like:
kubernetes.io/ingress.class: "nginx"
How can I configure rewrite in ingress and if it is possible with default load balancer.
There is an ongoing feature request to support rewrites with Ingress-GCE here: Github.com: Ingress-GCE: Rewrite.
You can use Ingress-nginx to have support for rewrites. There is official documentation about deploying it: Kubernetes.github.io: Ingress-nginx: Deploy. A minimal rewrite example for your /service1 case is sketched after the resource list below.
For more resources about rewrites you can use:
Kubernetes.github.io: Ingress nginx: Examples: Rewrite
Stackoverflow.com: Ingress nginx how to serve assests to application - this is an answer which shows an example on how to configure a playground for experimenting with rewrites
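For illustration, a rewrite for the /service1 case from the question could look roughly like this with Ingress-nginx (a sketch following the official rewrite example; service1-np is taken from the question, while the port is an assumption):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service1-rewrite
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # /service1/hello.html -> /hello.html on the backend
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - http:
        paths:
          - path: /service1(/|$)(.*)
            backend:
              serviceName: service1-np
              servicePort: 8080   # assumed port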
What is default load balancer on GKE.
If you create an Ingress resource with the default Ingress-GCE option, you will create an L7 HTTP(S) load balancer.
If you create a Service of type LoadBalancer in GKE, you will create an L4 network load balancer.
If you deploy an Ingress-nginx controller in a GKE cluster, you will create an L4 network load balancer pointing to the Ingress-nginx controller, which will then route the traffic according to your Ingress definition. If you are willing to use Ingress-nginx, you will need to specify:
kubernetes.io/ingress.class: "nginx"
in your Ingress definition.
Please take a look on this article: Medium.com: Google Cloud: Kubernetes Nodeport vs Loadbalancer vs Ingress
Where can I find a list of all annotations for it? I thought the full list was at https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/ but there is a completely different list there, and there is no kubernetes.io/ingress.global-static-ip-name annotation, which is widely used in Google examples.
The link that you provided with annotations is specifically for Ingress-nginx. These annotations will not work with Ingress-GCE.
The annotations used in GCP examples are specific to Ingress-GCE.
You can create a Feature Request for a list of available annotations for Ingress-GCE on Issuetracker.google.com.
Answering an old question, but hopefully it can help someone.
I found the list of annotations for GCP Ingress in the source code for ingress-gce.
I'm trying to configure nginx-ingress for mutual TLS, but only for a specific remote address. I tried to use a configuration snippet, but with no success:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($remote_addr = 104.214.x.x) {
    auth-tls-verify-client: on;
    auth-tls-secret: namespace/nginx-ca-secret;
    auth-tls-verify-depth: 1;
    auth-tls-pass-certificate-to-upstream: false;
  }
The auth-tls annotations work when applied as annotations, but inside the snippet they don't.
Any idea how to configure this or maybe a workaround to make it work?
The job of mTLS is basically restricting access to a service by requiring the client to present a certificate. If you expose a service and then require only clients with specific IP addresses to present a certificate, the entire rest of the world can still access your service without a certificate, which completely defeats the point of mTLS.
If you want more info, here is a good article that explains why TLS and mTLS exist and what is the difference between them.
There are two ways to make a sensible setup out of this:
Just use regular TLS instead of mTLS
Make a service in your cluster require mTLS to access it regardless of IP addresses
If you go for option 2, you need to configure the service itself to use mTLS, and then configure ingress to pass through the client certificate to the service. Here's a sample configuration for nginx ingress that will work with a service that expects mTLS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: mtls-svc
              servicePort: 443
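Alternatively, if requiring a client certificate from every client at the ingress itself is acceptable, the auth-tls settings from the question simply go in as plain annotations rather than inside a snippet (a sketch reusing the secret name from the question):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: namespace/nginx-ca-secret
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"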
I'm confused about nginx ingress with Kubernetes. I've been able to use it with "basic nginx auth" (unable to do so with oauth2 yet).
I've installed via helm:
helm install stable/nginx-ingress --name app-name --set rbac.create=true
This creates two services, an nginx-ingress-controller and an nginx-ingress-backend.
When I create an ingress, this ingress is targeted towards one and only one nginx-ingress-controller, but I have no idea how:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
    nginx.ingress.kubernetes.io/rewrite-target: /
  namespace: kube-system
spec:
  rules:
    - host:
      http:
        paths:
          - path: /
            backend:
              serviceName: tomcat-deployment-service
              servicePort: 8080
When I get this Ingress from the output of kubectl get ingress -n kube-system, it has a public, external IP.
What's concerning is that basic-auth DOESN'T APPLY to that external IP; it's wide open! Nginx authentication only kicks in when I try to visit the nginx-ingress-controller's IP.
I have a lot of questions.
How do I make an ingress created from kubectl apply -f ingress.yaml target a specific nginx-ingress-controller?
How do I keep this new ingress from having an external IP?
Why isn't nginx authentication kicking in?
What IP am I supposed to use (the nginx-ingress-controller's or the generated one)?
If I'm supposed to use the generated IP, what about the one from the controller?
I have been searching for decent, working examples (and poring over sparse, changing documentation and GitHub issues) for literally days.
EDIT:
In this "official" documentation, it's unclear whether http://10.2.29.4/ is the IP from the ingress or from the controller. I assume the controller, because when I run this, the other one doesn't even authenticate (it lets me in without asking for a password). Both IPs I'm using are external IPs (publicly available) on GCP.
I think you might have misunderstood some of the concepts involved.
Ingress is not a job (nor a service, nor a pod). It is just configuration; it cannot have an "IP". Think of an ingress as a routing rule or a routing table in your cluster.
Nginx-ingress-controller is a service of type LoadBalancer, with actual running pods behind it, that implements the ingress rules you created for your cluster.
Nginx-ingress-backend is likely a default backend that your nginx-ingress-controller will route to if no matching routes are found. See this.
In general, your nginx-ingress-controller should be the only entry point of your cluster. Other services in your cluster should have type ClusterIP, so that they are not exposed outside the cluster and are only accessible through your nginx-ingress-controller. In your case, since your service can be accessed from outside directly, it is evidently not of type ClusterIP. Just change the service type to ClusterIP to get it protected.
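For example, a backend service that should only be reachable through the controller would look roughly like this (a sketch; the service name is taken from your manifest, while the selector label and ports are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: tomcat-deployment-service
spec:
  type: ClusterIP          # not LoadBalancer/NodePort, so it is not exposed externally
  selector:
    app: tomcat            # assumed label; must match the tomcat pods
  ports:
    - port: 8080
      targetPort: 8080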
Based on the above understanding, I'll be glad to provide further help with the questions you have.
Some readings:
What is ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
K8s Services and external accessibility: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
My goal is to make my web application (deployed on a Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. As I'm planning to run the application on a bare-metal cluster, GCE and the service.alpha.kubernetes.io/external-traffic: OnlyLocal service annotation introduced in 1.4 are not applicable for me.
Looking for alternatives, I've found this question, which proposes setting up an Ingress to achieve my goal. So, I've set up the Ingress and the NginX Ingress Controller. The deployment went smoothly and I was able to connect to my web app via the Ingress address and port 80. However, in the logs I still see cluster-internal IPs (from the 172.16.0.0/16 range), which means that the external client IPs are not being properly passed via the Ingress. Could you please tell me what I need to configure in addition to the above to make it work?
My Ingress' config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebApp
spec:
  backend:
    serviceName: myWebApp
    servicePort: 8080
As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the Proxy protocol (the link points to the HAProxy documentation, but Nginx also supports it).
For this to work however, the upstream server (meaning the myWebApp service in your case) also needs to support this protocol. In case your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented in the official documentation.
According to the Nginx Ingress Controller's documentation, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Specify the name of the ConfigMap in your Ingress controller manifest, by adding the --nginx-configmap=<insert-configmap-name> flag to the command-line arguments.
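In the controller's Deployment, that flag sits alongside the other container arguments, roughly like this sketch (the ConfigMap name matches the example above; the default-backend flag and the $(POD_NAMESPACE) expansion are assumptions based on typical manifests for that controller version):
# excerpt from the nginx ingress controller Deployment spec (sketch)
containers:
  - name: nginx-ingress-controller
    args:
      - /nginx-ingress-controller
      - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      - --nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller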