In Kubernetes, to enable client-certificate authN, the annotation nginx.ingress.kubernetes.io/auth-tls-verify-client can be used in an ingress. Will client-cert authN work even if I don't do TLS termination in that ingress? For instance, in this ingress, will client-cert authN still work if I remove the tls block from the ingress?
tls:
- hosts:
  - mydomain.com
  secretName: tls-secret
(More info: I have two ingresses for the same host: one has a TLS section, and the other has a rule for a specific API path and a client-cert section but no TLS section.)
Also, if the request is sent to the HTTP endpoint (not HTTPS), I observed that the client cert is ignored even if the annotation value is set to on. Is this documented behavior?
If you define two ingresses as described, then a certificate will be required unless you set auth-tls-verify-client to optional. See the documentation mentioned in the comments.
Also, TLS is required if you want to do client certificate authentication. The client certificate is presented during the TLS handshake, which is why specifying client certificates for one ingress applies to all ingresses that share the same host (e.g. www.example.com).
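For illustration, here is a minimal sketch of an ingress that enables client certificate verification with ingress-nginx. The host, secret names, and backend service are placeholders; the secret referenced by auth-tls-secret is assumed to hold the CA certificate(s) used to verify clients.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-cert-demo
  annotations:
    # Require a client certificate during the TLS handshake
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # Secret (namespace/name) holding the CA used to verify client certificates
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
Without the tls block, there is no TLS handshake at the ingress for that host, so there is no point at which the client certificate could be requested.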
We have a dozen services exposed using an ingress-nginx controller in GKE.
In order to route the traffic correctly on the same domain name, we need to use a rewrite-target rule.
The services worked well without any maintenance since their launch in 2019, that is, until recently: cert-manager suddenly stopped renewing the Let's Encrypt certificates, and we "resolved" this by temporarily removing the "tls" section from the ingress definition, forcing our clients to use the HTTP version.
After that we removed all traces of cert-manager, attempting to set it up from scratch.
Now cert-manager creates the certificate signing request, spawns an acme-http-solver pod and adds it to the ingress; however, upon accessing its URL I can see that it returns an empty response, and not the expected token.
This has to do with the rewrite-target annotation that messes up the routing of the acme challenge.
What puzzles me the most is that this used to work before. (It was set up by a former employee.)
Disabling rewrite-target is unfortunately not an option, because it will stop the routing from working correctly.
Using dns01 won't work because our ISP does not support programmatic changes of the DNS records.
Is there a way to make this work without disabling rewrite-target?
P.S.
Here are a number of similar cases reported on GitHub:
https://github.com/cert-manager/cert-manager/issues/2826
https://github.com/cert-manager/cert-manager/issues/286
https://github.com/cert-manager/cert-manager/issues/487
None of them help.
Here's the definition of my ClusterIssuer
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: mail#domain.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
Please share the ClusterIssuer or Issuer you are using.
ingressClass
If the ingressClass field is specified, cert-manager will create new Ingress resources in order to route traffic to the acmesolver pods, which are responsible for responding to ACME challenge validation requests.
Ref : https://cert-manager.io/v0.12-docs/configuration/acme/http01/#ingressclass
Mostly you won't see the HTTP solver challenge for long; it comes and gets removed once DNS or HTTP validation is working fine.
Also, make sure your ingress doesn't have an SSL-redirect annotation; that can also be a reason behind certs not getting generated.
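For example, a forced HTTPS redirect on the ingress can interfere with the plain-HTTP ACME challenge. A hypothetical snippet of the annotations worth checking on the affected ingress (the values shown disable the redirect):
metadata:
  annotations:
    # If "true", plain-HTTP requests (including ACME challenge requests) are redirected to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # Forces the redirect even when no TLS section is configured
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"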
Did you try checking the other cert-manager objects, like the Order and the Certificate status? With kubectl describe challenge, are you getting a 404 there?
If you are retrying continuously, there is a chance you have hit Let's Encrypt's rate limit for certificate requests.
Troubleshooting : https://cert-manager.io/docs/faq/troubleshooting/#troubleshooting-a-failed-certificate-request
When you configure an Issuer with http01, the default serviceType is NodePort. This means it won't even go through the ingress controller. From the docs:
By default, type NodePort will be used when you don't set HTTP01 or when you set serviceType to an empty string. Normally there's no need to change this.
I'm not sure what the rest of your setup looks like, but http01 causes the ACME server to make HTTP requests (not HTTPS). You need to make sure your nginx has a listener for HTTP (80). The ACME server does follow redirects, so you can listen on HTTP and redirect all traffic to HTTPS; this is legit and works.
cert-manager creates an ingress resource for validation, which directs traffic to the temporary solver pod. This ingress has its own set of rules, and you can control it using this setting; you can try to disable or modify the rewrite-targets on this resource.
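For reference, here is a hedged sketch of what that can look like, assuming a cert-manager version whose HTTP-01 ingress solver supports the serviceType and ingressTemplate fields; the annotation shown is only an example of metadata you might add to the generated solver ingress.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: mail#domain.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
          # Optional: the solver Service type (NodePort by default)
          serviceType: ClusterIP
          # Optional: metadata added to the temporary solver Ingress that cert-manager creates,
          # e.g. to make sure nothing redirects the challenge path away from the solver
          ingressTemplate:
            metadata:
              annotations:
                nginx.ingress.kubernetes.io/ssl-redirect: "false"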
Another thing I would try is to access this URL from inside the cluster (bypassing ingress-nginx). If it works directly, then it's an ingress/networking problem; otherwise it's something else.
Please share the relevant nginx and cert-manager logs; they might be useful for debugging or understanding where your problem exists.
Following Installing nginx ingress in AKS cluster fails with SyncLoadBalancerFailed error, I tried to add TLS to the ingress. I followed
https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls
https://learn.microsoft.com/en-us/azure/aks/ingress-own-tls
Everything works as per the documentation, including the curl tests at the end.
My problem is that I expect to be able to browse the application at https://EXTERNAL_IP.
Instead I get an error (screenshot omitted). If I try HTTP I get another error (screenshot omitted).
Also note that if I remove the TLS-related entries from the Ingress
tls:
- hosts:
  - demo.azure.com
  secretName: aks-ingress-tls
rules:
- host: demo.azure.com
then HTTP access works fine.
HTTPS won't work with https://EXTERNAL_IP. Certificates only work with domain names; in your case that would be https://demo.azure.com.
You also need to trust the certificate authority which created this Fake Certificate with your client or browser.
Here you can read about TLS/SSL Certificates.
I have a sample application (web-app, backend-1, backend-2) deployed on minikube all under a JWT policy, and they all have proper destination rules, Istio sidecar and MTLS enabled in order to secure the east-west traffic.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: oidc
spec:
  targets:
  - name: web-app
  - name: backend-1
  - name: backend-2
  peers:
  - mtls: {}
  origins:
  - jwt:
      issuer: "http://myurl/auth/realms/test"
      jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
When I run the following command I receive a 401 Unauthorized response when requesting the data from the backend, which is due to $TOKEN not being forwarded in the headers of the requests to backend-1 and backend-2.
$> curl http://minikubeip/api -H "Authorization: Bearer $TOKEN"
Is there a way to forward http headers to backend-1 and backend-2 using native kubernetes/istio? Am I forced to make application code changes to accomplish this?
Edit:
This is the error I get after applying my oidc policy. When I curl web-app with the auth token I get:
{"errors":[{"code":"APP_ERROR_CODE","message":"401 Unauthorized"}]}
Note that when I curl backend-1 or backend-2 with the same auth-token I get the appropriate data. Also, there is no other destination rule/policy applied to these services currently, policy enforcement is on, and my istio version is 1.1.15.
This is the policy I am applying:
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: default
spec:
  # peers:
  # - mtls: {}
  origins:
  - jwt:
      issuer: "http://10.148.199.140:8080/auth/realms/test"
      jwksUri: "http://10.148.199.140:8080/auth/realms/test/protocol/openid-connect/certs"
  principalBinding: USE_ORIGIN
should the token be propagated to backend-1 and backend-2 without any other changes?
Yes, the policy should transfer the token to both backend-1 and backend-2.
There is a GitHub issue where users had the same issue as you.
A few pieces of information from there:
The JWT is verified by an Envoy filter, so you'll have to check the Envoy logs. For the code, see https://github.com/istio/proxy/tree/master/src/envoy/http/jwt_auth
Pilot retrieves the JWKS to be used by the filter (it is inlined into the Envoy config), you can find the code for that in pilot/pkg/security
And there is another question about this on Stack Overflow, where the accepted answer is:
The problem was resolved with two options: 1. Replace the service name and port with the external server IP and external port (for issuer and jwksUri). 2. Disable the usage of mTLS and its policy (known issue: https://github.com/istio/istio/issues/10062).
From istio documentation
For each service, Istio applies the narrowest matching policy. The order is: service-specific > namespace-wide > mesh-wide. If more than one service-specific policy matches a service, Istio selects one of them at random. Operators must avoid such conflicts when configuring their policies.
To enforce uniqueness for mesh-wide and namespace-wide policies, Istio accepts only one authentication policy per mesh and one authentication policy per namespace. Istio also requires mesh-wide and namespace-wide policies to have the specific name default.
If a service has no matching policies, both transport authentication and origin authentication are disabled.
Istio now supports header propagation; it probably didn't when this thread was created.
You can allow the original header to be forwarded by using forwardOriginalToken: true in JWTRules, or forward a valid JWT payload using outputPayloadToHeader in JWTRules.
Reference: Istio JWTRule documentation
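A hedged sketch of what this can look like with the newer security.istio.io/v1beta1 API; the workload selector, namespace, and output header name are placeholders, and the issuer matches the one from the question.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-auth
  namespace: default
spec:
  selector:
    matchLabels:
      app: web-app
  jwtRules:
  - issuer: "http://myurl/auth/realms/test"
    jwksUri: "http://myurl/auth/realms/test/protocol/openid-connect/certs"
    # Keep the original Authorization header on the upstream request
    # (by default it is stripped after verification)
    forwardOriginalToken: true
    # Optionally also expose the verified JWT payload in a custom header
    outputPayloadToHeader: "x-jwt-payload"
With the token kept on the request, web-app can pass the same Authorization header along when it calls backend-1 and backend-2.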
I've configured an NGINX Ingress to use SSL. This works fine, but I'm trying to use hostAliases to route all requests from one domain back to my cluster, and it's failing with the following error:
Error: unable to verify the first certificate
My alias:
hostAliases:
- ip: "MY.CLUSTER.IP"
  hostnames:
  - "my.domain.com"
Is there a way to use this aliasing and still get ssl working?
According to this, a host alias is just a record in the /etc/hosts file, which overrides the usual DNS resolution. If I understand your issue correctly, in this case you just need to have a valid TLS certificate for "my.domain.com" installed on MY.CLUSTER.IP.
Hope it helps.
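For illustration, a hedged sketch of serving that name with a proper certificate via the ingress; the secret and service names are placeholders, and the tls.crt in the secret should contain the full certificate chain (a missing intermediate certificate is a common cause of "unable to verify the first certificate").
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-domain-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - my.domain.com
    # tls.crt here should hold the leaf certificate followed by any intermediates
    secretName: my-domain-tls
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80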
A typical ingress with TLS configuration is like below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
By default, the load balancer will talk to backend service in HTTP. Can I configure Ingress so that the communication between load balancer and the backend service is also HTTPS?
Update:
Found GLBC talking about enabling HTTPS backend for GCE Ingress. Excerpt from the document:
"Backend HTTPS
For encrypted communication between the load balancer and your Kubernetes service, you need to decorate the service's port as expecting HTTPS. There's an alpha Service annotation for specifying the expected protocol per service port. Upon seeing the protocol as HTTPS, the ingress controller will assemble a GCP L7 load balancer with an HTTPS backend-service with a HTTPS health check."
It is not clear whether the load balancer accepts third-party-signed server certificates, self-signed certificates, or both, how a CA cert should be configured on the load balancer to authenticate the backend server, or whether it bypasses that authentication check entirely.
On ingress-nginx you can use the nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" annotation and point it to the HTTPS port on the service.
This is ingress controller specific and other ingress controllers might not provide the option (or provide an option that works differently)
With this setup, the ingress controller decrypts the traffic. (This allows the ingress controller to control things like ciphers and the certificate presented to the user and do path-based routing, which SSL passthrough does not allow)
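As an illustration, a minimal sketch of such an ingress; the host, secret, service name, and port are placeholders, and the service is assumed to expose a TLS port.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: https-backend
  annotations:
    # Tell ingress-nginx to proxy to the backend over HTTPS instead of HTTP
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 443   # the service port that serves TLS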
It is possible to configure certificate validation with several other (ingress-nginx-specific) annotations:
Docs
nginx.ingress.kubernetes.io/proxy-ssl-verify (it defaults to "off")
nginx.ingress.kubernetes.io/proxy-ssl-verify-depth
nginx.ingress.kubernetes.io/proxy-ssl-ciphers (ciphers, not validation related)
nginx.ingress.kubernetes.io/proxy-ssl-name (Override the name that the cert is checked against)
nginx.ingress.kubernetes.io/proxy-ssl-protocols (SSL / TLS versions)
nginx.ingress.kubernetes.io/proxy-ssl-server-name (SNI passthrough)
If you follow the instructions in the "Backend HTTPS" section of GLBC, the GCP HTTP(S) load balancer will build an HTTPS connection with the backend and traffic will be encrypted. There is no need to configure a CA certificate on the LB side (actually, you can't). This implies the load balancer will skip server certificate authentication.
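For reference, a hedged sketch of marking a service port as HTTPS for the GCE ingress controller; on current GKE this is the cloud.google.com/app-protocols annotation (older GLBC docs describe the alpha service.alpha.kubernetes.io/app-protocols form), and the names and ports here are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: s1
  annotations:
    # Tell the GCE ingress controller that this named port expects HTTPS
    cloud.google.com/app-protocols: '{"my-https-port":"HTTPS"}'
spec:
  type: NodePort
  selector:
    app: s1
  ports:
  - name: my-https-port
    port: 443
    targetPort: 8443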
You should enable the SSL passthrough config on the ingress or load balancer.
I suggest using nginx ingress and kube-lego for SSL.
With this combination, you can use the ssl-passthrough config.
Nginx ingress for k8s
kube-lego for generating SSL certificates on the fly
Guidance for enabling the ssl-passthrough config
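If you go the passthrough route with ingress-nginx, here is a minimal hedged sketch; the host and service are placeholders, the controller itself must be started with the --enable-ssl-passthrough flag, and with passthrough the backend terminates TLS itself, so path-based routing and the other TLS annotations no longer apply.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: passthrough-example
  annotations:
    # Pass the TLS connection through to the backend untouched (routing is SNI-based)
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 443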