I have two frontend applications that access the same API gateway:
example.com and
admin.example.com
I'm using Traefik as my ingress controller. I currently have the following annotation in my k8s ingress configuration:
ingress.kubernetes.io/custom-response-headers: Access-Control-Allow-Origin:https://example.com || Access-Control-Allow-Methods:POST, GET, HEAD, OPTIONS, PUT, DELETE
I would also like to handle https://admin.example.com within the same block. Is there a way to set up a conditional expression here, dependent on the origin URL the traffic comes from? Access-Control-Allow-Origin: * is not acceptable for my use case (browsers complain).
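For context, this is roughly where the annotation sits on the Ingress object today; only the metadata fragment is shown, and the ingress.class annotation is an assumption for a Traefik 1.x setup:
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
    # Single allowed origin plus the allowed methods, exactly as in the current config.
    ingress.kubernetes.io/custom-response-headers: "Access-Control-Allow-Origin:https://example.com || Access-Control-Allow-Methods:POST, GET, HEAD, OPTIONS, PUT, DELETE"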
We have a dozen services exposed using an ingress-nginx controller in GKE.
In order to route the traffic correctly on the same domain name, we need to use a rewrite-target rule.
The services had worked well without any maintenance since their launch in 2019, until recently, when cert-manager suddenly stopped renewing the Let's Encrypt certificates. We "resolved" this by temporarily removing the "tls" section from the ingress definition, forcing our clients to use the http version.
After that we removed all traces of cert-manager attempting to set it up from scratch.
Now cert-manager creates the certificate signing request, spawns an acme http solver pod and adds it to the ingress; however, when I access its URL I can see that it returns an empty response instead of the expected token.
This has to do with the rewrite-target annotation that messes up the routing of the acme challenge.
What puzzles me the most is that this used to work before. (It was set up by a former employee.)
Disabling rewrite-target is unfortunately not an option, because it will stop the routing from working correctly.
Using dns01 won't work because our ISP does not support programmatic changes of the DNS records.
Is there a way to make this work without disabling rewrite-target?
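For illustration, the pattern in play looks roughly like this; the hostname, path, and service name below are hypothetical, since the real manifests are not included here:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress                       # hypothetical
  annotations:
    kubernetes.io/ingress.class: nginx
    # Rewrites /svc-a/<rest> to /<rest> before the request reaches the backend.
    # As described above, the same rewrite ends up mangling the
    # /.well-known/acme-challenge/<token> path, so the solver never sees it.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: services.example.com                 # hypothetical
    http:
      paths:
      - path: /svc-a(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: svc-a                        # hypothetical
            port:
              number: 80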
P.S.
Here are a number of similar cases reported on GitHub:
https://github.com/cert-manager/cert-manager/issues/2826
https://github.com/cert-manager/cert-manager/issues/286
https://github.com/cert-manager/cert-manager/issues/487
None of them help.
Here's the definition of my ClusterIssuer
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: mail@domain.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
Please share the ClusterIssuer or Issuer you are using.
ingressClass
If the ingressClass field is specified, cert-manager will create new Ingress resources in order to route traffic to the acmesolver pods, which are responsible for responding to ACME challenge validation requests.
Ref : https://cert-manager.io/v0.12-docs/configuration/acme/http01/#ingressclass
Usually you don't see the HTTP solver challenge for long; it is created and then removed once the DNS or HTTP check passes.
Also, make sure your ingress doesn't have an SSL-redirect annotation; that can also be one reason certs don't get generated.
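For reference, this is the annotation being referred to, shown here with redirects disabled; whether you actually need to change it depends on your setup:
metadata:
  annotations:
    # Controls whether nginx redirects plain-HTTP requests for this ingress
    # to HTTPS; the HTTP-01 challenge arrives over plain HTTP.
    nginx.ingress.kubernetes.io/ssl-redirect: "false"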
Did you try checking the other cert-manager objects, like the Order and CertificateRequest status? With kubectl describe challenge, are you getting a 404 there?
If you keep retrying, there is a chance you have hit Let's Encrypt's rate limit for certificate requests.
Troubleshooting : https://cert-manager.io/docs/faq/troubleshooting/#troubleshooting-a-failed-certificate-request
When you configure an Issuer with http01, the default serviceType is NodePort. This means it won't even go through the ingress controller. From the docs:
By default, type NodePort will be used when you don't set HTTP01 or when you set serviceType to an empty string. Normally there's no need to change this.
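For orientation, a hedged sketch of where that field sits in the solver configuration; the ClusterIP value is just an example (leaving it unset keeps the NodePort default):
solvers:
- http01:
    ingress:
      class: nginx
      # Defaults to NodePort when left unset; shown here only to make the
      # field visible. ClusterIP is sometimes used where NodePorts are not allowed.
      serviceType: ClusterIP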
I'm not sure what the rest of your setup looks like, but http01 causes the ACME server to make HTTP requests (not HTTPS). You need to make sure your nginx has a listener for HTTP (80). The ACME server does follow redirects, so you can listen on HTTP and redirect all traffic to HTTPS; this is legitimate and works.
cert-manager creates an ingress resource for validation, which directs traffic to the temporary solver pod. This ingress has its own set of rules, and you can control it from the issuer configuration (see the sketch below). You can try to disable or modify the rewrite-target on this resource.
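Newer cert-manager releases let you attach annotations to that generated ingress through the solver's ingressTemplate field; a hedged sketch, assuming your cert-manager version supports it and treating the annotation shown as a placeholder:
solvers:
- http01:
    ingress:
      class: nginx
      # Metadata copied onto the temporary solver Ingress that cert-manager
      # creates for the HTTP-01 challenge.
      ingressTemplate:
        metadata:
          annotations:
            # Example only -- put whatever annotations the solver ingress
            # needs in your environment here.
            nginx.ingress.kubernetes.io/ssl-redirect: "false"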
Another thing I would try is to access this URL from inside the cluster (bypassing the ingress nginx). If it works directly, then it's an ingress / networking problem, otherwise it's something else.
Please share the relevant nginx and cert-manager logs; they might be useful for debugging or understanding where your problem exists.
I have an application running on Kubernetes that uses nginx as the ingress controller, which created a load balancer in AWS. I noticed that by default the application is open to the world, with 0.0.0.0/0 added to the inbound rules of the AWS security group attached to the load balancer.
I want to allow only certain IPs to access the application. That makes me want to use the nginx.ingress.kubernetes.io/whitelist-source-range annotation in the ingress controller.
But I don't know beforehand the IPs of the entities that must be allowed to access the application. An upstream process (a Jenkins job) creates certain containers, which then try to talk to the application running on Kube.
How can I dynamically modify the ingress controller annotation to add and remove IPs without causing any downtime? And no, I do not have a common IP range that I can add; I have several different VPCs, each with its own CIDR block.
Short answer: you don't put the whitelist annotation on the controller, you put it on the ingress resource. And updating that does not require any downtime.
--
Long answer: Yes, by default the controller's load balancer is open to the world, and that is expected. All traffic comes into the ingress controller's load balancer, and the controller then determines how to route it within the cluster.
It determines this routing via the use of ingress resources. Here is an example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: 12.0.0.0/8,10.0.0.0/8
  name: ingress-whitelist-example
spec:
  rules:
  - host: somehost.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: service-for-somehost
            port:
              number: 80
  tls:
  - hosts:
    - somehost.com
    secretName: tls-secret
The ingress controller will receive traffic for (in this example) the host somehost.com and will route the request to the service service-for-somehost in the cluster, on port 80.
If a request comes into the controller outside of the ranges 12.0.0.0/8 or 10.0.0.0/8 (as described by the annotation), then the controller will reject the request with a Forbidden error.
The ingress resource does not need to be respun or taken down for updates, the way Deployments do.
The AWS load balancer is external to the cluster. You can, of course, choose to block/whitelist traffic within the AWS cloud before it reaches the cluster and that is fine, but that is not managed within the nginx controller.
Some further reading on Ingresses is available here.
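For completeness, if you do want to restrict traffic before it reaches the cluster, one common approach is loadBalancerSourceRanges on the controller's Service; a hedged sketch, where the Service name, namespace, selector, and CIDRs are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller             # hypothetical controller Service name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Propagated by the cloud provider (on AWS, into the security group rules),
  # so only these ranges can reach the load balancer at all.
  loadBalancerSourceRanges:
  - 12.0.0.0/8
  - 10.0.0.0/8
  selector:
    app.kubernetes.io/name: ingress-nginx    # hypothetical selector
  ports:
  - name: https
    port: 443
    targetPort: 443
Note that this restricts everything behind that load balancer, not a single ingress, so it complements rather than replaces the per-ingress whitelist annotation.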
In Kubernetes, to enable client-certificate authN, the annotation nginx.ingress.kubernetes.io/auth-tls-verify-client can be used in an ingress. Will client-cert authN work even if I don't do TLS termination in that ingress? For instance, in this ingress, will client-cert authN still work if I remove the tls block from the ingress?
tls:
- hosts:
  - mydomain.com
  secretName: tls-secret
(More info: I have two ingresses for the same host: one has a TLS section, and the other has a rule for a specific API path and a client-cert section, but no TLS section.)
Also, if the request is sent to the http endpoint (not https), I observed that the client cert is ignored even if the annotation value is set to on. Is this documented behavior?
If you define two ingresses as described then a certificate will be required unless you specify auth-tls-verify-client as optional. See the documentation mentioned in the comments.
Also, TLS is required if you want to do client certificate authentication. The client certificate is exchanged during the TLS handshake, which is why specifying client certificates for one ingress applies to all ingresses where the host is the same (e.g. www.example.com).
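For reference, a hedged sketch of how the client-certificate annotations and the tls block sit together on one ingress; the host and TLS secret are the ones from the question, while the CA secret and service names are hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-cert-example                        # hypothetical
  annotations:
    kubernetes.io/ingress.class: nginx
    # Secret (namespace/name) holding the CA bundle used to verify client
    # certificates; the secret name here is hypothetical.
    nginx.ingress.kubernetes.io/auth-tls-secret: default/ca-secret
    # "on" rejects requests without a valid client certificate;
    # "optional" lets them through without one.
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service                        # hypothetical
            port:
              number: 80
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret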
Adding the annotation:
annotations:
  nginx.ingress.kubernetes.io/auth-url: http://my-auth-service.my-api.svc.cluster.local:8080
...to my ingress rule causes a 500 response from the ingress controller (the ingress works without it).
The service exists, and I can ssh into the ingress controller and curl it, getting a response:
curl http://my-auth-service.my-api.svc.cluster.local:8080 produces a 200 response.
I checked the ingress controller logs, but they say that the service returned a 404. If I can curl the same URL, why would it return a 404?
2019/07/01 20:26:11 [error] 558#558: *443367 auth request unexpected status: 404 while sending to client, client: 192.168.65.3, server: localhost, request: "GET /mocks HTTP/1.1", host: "localhost"
I'm not sure what to check to determine the problem.
FWIW, for future readers: I ran into the same problem, and after looking at my auth service logs, I noticed that the nginx ingress requests were appending a /_external-auth-xxxxxx path to the request URL.
Here's where the ingress controller does it, in the source:
https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/template/template.go#L428
And here is how I'm handling it in my own auth service (an Elixir/Phoenix route):
get "/_external-auth*encoded_nginx_auth_url", TokenController, :index
Here are the options you should check:
Global External Authentication
According to this documentation:
By default the controller redirects all requests to an existing service that provides authentication if global-auth-url is set in the NGINX ConfigMap. If you want to disable this behavior for that ingress, you can use enable-global-auth: "false" in the NGINX ConfigMap. nginx.ingress.kubernetes.io/enable-global-auth: indicates if GlobalExternalAuth configuration should be applied or not to this Ingress rule. Default values is set to "true".
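A hedged sketch of both pieces mentioned in that excerpt; the ConfigMap name and namespace depend on how ingress-nginx was installed, the auth URL is the one from the question, and the ingress details are hypothetical:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller             # name and namespace vary by installation
  namespace: ingress-nginx
data:
  # Makes every ingress handled by this controller call the auth service,
  # unless an ingress opts out with the enable-global-auth annotation.
  global-auth-url: "http://my-auth-service.my-api.svc.cluster.local:8080"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mocks-ingress                        # hypothetical
  annotations:
    # Opts this ingress out of the global external auth.
    nginx.ingress.kubernetes.io/enable-global-auth: "false"
spec:
  rules:
  - http:
      paths:
      - path: /mocks
        pathType: Prefix
        backend:
          service:
            name: mocks-service              # hypothetical
            port:
              number: 80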
Server Name Indication
Check your proxy_ssl_server_name setting in nginx. It indicates whether HTTPS uses SNI or not, and it is set to false by default.
Please let me know if that helped.
Hello, I tried looking at the auth options in the annotations for the Kubernetes Traefik ingress. I couldn't find anything that would let me configure Forward Authentication as documented here: https://docs.traefik.io/configuration/entrypoints/#forward-authentication
I would like to be able to configure forward authentication per ingress resource. This is possible in the nginx ingress controller.
Is that supported currently?
According to the Traefik documentation, that feature will be available in version 1.7 of Traefik (currently a release candidate).
Here is a link to the authentication documentation
My guess is that you will need to add the following 2 annotations:
ingress.kubernetes.io/auth-type: forward
ingress.kubernetes.io/auth-url: https://example.com
and probably also the following annotation, with the corresponding header fields your auth service returns as its value:
ingress.kubernetes.io/auth-response-headers: X-Auth-User, X-Secret
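Put together, a hedged sketch of how those annotations might sit on an ingress for Traefik 1.7; the host, service, and resource names are hypothetical, while the auth URL and header names are the ones from this answer:
apiVersion: extensions/v1beta1               # Ingress API version of the Traefik 1.7 era
kind: Ingress
metadata:
  name: forward-auth-example                 # hypothetical
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: forward
    ingress.kubernetes.io/auth-url: https://example.com
    ingress.kubernetes.io/auth-response-headers: X-Auth-User, X-Secret
spec:
  rules:
  - host: app.example.com                    # hypothetical
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service           # hypothetical
          servicePort: 80
On newer clusters the annotations would be the same; only the Ingress apiVersion and backend syntax differ.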