Waiting for http-01 challenge propagation: failed to perform self check GET request, timed out - Kubernetes

I set up a Kubernetes cluster, currently with two nodes, and MetalLB as a load balancer.
Now I would like to use an Ingress and secure it via SSL. For this I decided on the NGINX ingress controller together with cert-manager, and set it up following the tutorial on their site.
But now I get the error:
Waiting for http-01 challenge propagation: failed to perform self check GET request 'http://example.....zone/.well-known/acme-challenge/A5lFUj69fDccpXlvlyVw9-ekATEjt_-DKiJUzJSafxs': Get "http://example.....zone/.well-known/acme-challenge/A5lFUj69fDccpXlvlyVw9-ekATEjt_-DKiJUzJSafxs": dial tcp 94.130.150.125:80: connect: connection timed out
My current ClusterIssuer looks like this:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: letsencrypt@mymail.de
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
And I am trying to automatically provision a certificate for:
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    cert-manager.io/acme-challenge-type: http01
spec:
  tls:
  - hosts:
    - example.....zone
    secretName: example-...-zone-tls
  rules:
  - host: example.....zone
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
Manually, I can reach every address without a problem.
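For illustration, a manual check from outside the cluster, using the challenge URL from the error message above, would be something like:

curl -v http://example.....zone/.well-known/acme-challenge/A5lFUj69fDccpXlvlyVw9-ekATEjt_-DKiJUzJSafxs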

This one worked for me.
Change the LoadBalancer Service for ingress-nginx: add or change externalTrafficPolicy: Cluster.
The reason: the pod with the certificate issuer wound up on a different node than the load balancer, so it couldn't talk to itself through the ingress.
Below is the complete block, taken from https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.26.1/deploy/static/provider/cloud-generic.yaml:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  # CHANGE/ADD THIS
  externalTrafficPolicy: Cluster
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https
---
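The same change can also be applied in place, without re-applying the whole manifest, e.g. with kubectl patch (a minimal sketch, assuming the Service name and namespace from the block above):

kubectl patch svc ingress-nginx -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'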

The error message you are getting can mean a wide variety of issues. However, there are some things you can check in order to fix it:
Delete the Ingress, the certificates and cert-manager, then add them all back to make sure everything installs cleanly. Sometimes stale certs or bad/multiple Ingress paths are the issue.
Make sure your traffic allows HTTP, or uses HTTPS with a trusted cert.
Check the hairpin mode of your load balancer and make sure it is working (see the test command after this list).
Add the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation to the Ingress rule. Wait a moment and see if a valid cert gets created.
GKE only: add this Ingress annotation: certmanager.k8s.io/acme-http01-edit-in-place: "true".
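To test the hairpin path mentioned above, you can fetch the challenge URL from inside the cluster with a throwaway pod, for example (a sketch; the pod name is arbitrary and curlimages/curl is just a convenient image):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://example.....zone/.well-known/acme-challenge/A5lFUj69fDccpXlvlyVw9-ekATEjt_-DKiJUzJSafxs

If this times out inside the cluster while the same URL works from outside, the self check is failing on the load balancer/hairpin path, which is exactly what the externalTrafficPolicy change above addresses.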
Please let me know if that helped.

Related

How do I point Kubernetes Ingress to the Istio ingress gateway?

I have a currently functioning Istio application. I would now like to add HTTPS using Google Cloud managed certificates. I set up the Ingress like this...
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
  namespace: istio-system
spec:
  domains:
  - mydomain.co
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: managed-cert
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: istio-ingressgateway
      port:
        number: 443
---
But when I try going to the site (https://mydomain.co) I get...
Secure Connection Failed
An error occurred during a connection to earth-615.mydomain.co. Cannot communicate securely with peer: no common encryption algorithm(s).
Error code: SSL_ERROR_NO_CYPHER_OVERLAP
The functioning virtual service/gateway looks like this...
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gateway
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.global-static-ip-name: earth-616
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP2
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: test-app
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - "istio-system/ingress-gateway"
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: test-app
        port:
          number: 8000
Pointing the k8s Ingress towards the Istio ingress gateway would add latency, and it would require the Istio gateway to use SNI passthrough to accept the already-TLS-terminated HTTPS traffic.
Instead, the best practice here is to use the certificate directly with an Istio Secure Gateway.
You can use a certificate and key issued by a Google CA, e.g. from Certificate Authority Service, create a k8s secret to hold them, and then configure the Istio Secure Gateway to terminate the TLS traffic as documented here.
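A minimal sketch of that setup (the secret and Gateway names here are made up, and the cert/key files are whatever your CA issued):

kubectl create secret tls mydomain-credential -n istio-system \
  --cert=mydomain.crt --key=mydomain.key

And a Gateway that terminates TLS with it:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: secure-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE                          # terminate TLS at the gateway
      credentialName: mydomain-credential   # secret must live in istio-system
    hosts:
    - mydomain.co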

K8s Internal ACME server with cert-manager for issuing only internal k8s certs - http challenge issue

Is it possible to use cert-manager to generate a certificate for a workload entirely inside the cluster, with an ACME server in one of the namespaces? As far as I understand, cert-manager tries to reach the DNS name by egressing the cluster and ingressing back in to perform the HTTP challenge, but what if I do not want to leave the cluster? I do not want cert-manager to create an Ingress resource. Let the whole challenge take place inside the cluster.
My case:
I've got an ACME server (step-ca) inside one of my namespaces.
I need to create a certificate for my pod in another namespace, e.g. with common name "${app}.${namespace}".
Remarks: in my case the problem is more complicated due to Istio on board. For ingress traffic cert-manager works fine with the internal ACME server, but for egress traffic I need to go over stunnel (in each pod) to reach Squid outside, and I need those certs for stunnel.
The only way I came up with:
Each app has its own Issuer:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: k8s-acme-local
  namespace: ${APP_NS}
spec:
  acme:
    email: some@email
    privateKeySecretRef:
      name: k8s-acme-local
    server: https://${ACME_SVC}.${ACME_NS}/acme/acme/directory
    solvers:
    - http01:
        ingress:
          podTemplate:
            metadata:
              labels:
                app: ${APP}
          ingressTemplate: {}
          serviceType: ClusterIP
Before creating the Certificate resource, I create a Service to take over the traffic to ${APP}.${APP_NS}:
apiVersion: v1
kind: Service
metadata:
  name: ${APP}
  namespace: ${APP_NS}
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8089
  selector:
    app: ${APP}
  sessionAffinity: None
  type: ClusterIP
And then the Certificate resource:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ${APP}
  namespace: ${APP_NS}
spec:
  secretName: ${APP}
  issuerRef:
    name: k8s-acme-local
    kind: Issuer
    group: cert-manager.io
  commonName: ${APP}.${APP_NS}
  dnsNames:
  - ${APP}.${APP_NS}
Now the ACME server will do the challenge over my Service, not over the Ingress (and the cert-manager service), which stays unused. I don't like it, but it works. This method has one critical drawback: everyone in the cluster can do it and impersonate any existing or non-existing app. I'm looking forward to your opinions and tips.
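For what it's worth, one way to sanity-check the result (assuming the variables above are expanded):

kubectl -n ${APP_NS} get certificate ${APP}
kubectl -n ${APP_NS} get secret ${APP} -o jsonpath='{.data.tls\.crt}' \
  | base64 -d | openssl x509 -noout -subject -dates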

GKE Managed Certificate not serving over HTTPS

I'm trying to spin up a Kubernetes cluster that I can access securely and can't seem to get that last part. I am following this tutorial: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
Here are the .yaml files I'm using for my Ingress, NodePort Service and ManagedCertificate:
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: client-v1-cert
spec:
  domains:
  - api.mydomain.com
---
apiVersion: v1
kind: Service
metadata:
  name: client-nodeport-service
spec:
  selector:
    app: myApp
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-v1
    networking.gke.io/managed-certificates: client-v1-cert
spec:
  backend:
    serviceName: client-nodeport-service
    servicePort: 80
No errors that I can see in the GCP console. I can also access my API at http://api.mydomain.com/, but it won't work when I try HTTPS. I've been banging my head on this for a few days and am just wondering if there's some little thing I'm missing.
--- UPDATE ---
Output of kubectl describe managedcertificate
Name:         client-v1-cert
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
API Version:  networking.gke.io/v1beta1
Kind:         ManagedCertificate
Metadata:
  Creation Timestamp:  2019-07-01T17:42:43Z
  Generation:          3
  Resource Version:    1136504
  Self Link:           /apis/networking.gke.io/v1beta1/namespaces/default/managedcertificates/client-v1-cert
  UID:                 b9b7bec1-9c27-33c9-a309-42284a800179
Spec:
  Domains:
    api.mydomain.com
Status:
  Certificate Name:    mcrt-286cdab3-b995-40cc-9b3a-28439285e694
  Certificate Status:  Active
  Domain Status:
    Domain:  api.mydomain.com
    Status:  Active
  Expire Time:  2019-09-29T09:55:12.000-07:00
Events:  <none>
I figured out a solution to this problem. I went into my GCP console, located the load balancer associated with the Ingress, and noticed that there was only one frontend protocol: HTTP serving over port 80. So I manually added another frontend for HTTPS, selected the managed certificate from the list, waited about 5 minutes, and everything worked.
I have no idea why my ingress.yaml didn't do that automatically, though. So although the problem is fixed, if anyone out there knows why, I would love to know.
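If anyone wants to inspect the same thing from the CLI instead of the console, something like the following lists the frontends and certificates GKE created (resource names will differ per project):

gcloud compute forwarding-rules list
gcloud compute target-https-proxies list
gcloud compute ssl-certificates list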

Istio: Ingress for ACME-challenge not working (503)

We are running Istio 1.1.3 on 1.12.5-gke.10 cluster nodes.
We use cert-manager for managing our Let's Encrypt certificates.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: certs.ourdomain.nl
  namespace: istio-system
spec:
  secretName: certs.ourdomain.nl
  renewBefore: 360h # 15d
  commonName: operations.ourdomain.nl
  dnsNames:
  - operations.ourdomain.nl
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  acme:
    config:
    - http01:
        ingressClass: istio
      domains:
      - operations.ourdomain.nl
Next we see the ACME backend, service (NodePort) and ingress deployed. The auto-generated ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  generateName: cm-acme-http-solver-
  generation: 1
  labels:
    certmanager.k8s.io/acme-http-domain: "1734084804"
    certmanager.k8s.io/acme-http-token: "1476005735"
  name: cm-acme-http-solver-69vzw
  namespace: istio-system
  ownerReferences:
  - apiVersion: certmanager.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Certificate
    name: certs.ourdomain.nl
    uid: 751011d2-4fc8-11e9-b20e-42010aa40101
spec:
  rules:
  - host: operations.ourdomain.nl
    http:
      paths:
      - backend:
          serviceName: cm-acme-http-solver-fzk8q
          servicePort: 8089
        path: /.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck
status:
  loadBalancer: {}
However, when we try to access the URL operations.ourdomain.nl/.well-known/acme-challenge/dnrcr-LRRMdXhBaUefjqpHQx8ytYuk-feEfXu9gW-Ck we get a 404.
We do have a load balancer for Istio:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  labels:
    app: istio-ingress
    chart: gateways-1.1.0
    heritage: Tiller
    istio: ingress
    release: istio
  name: istio-ingress
  namespace: istio-system
spec:
  selector:
    app: istio-ingress
  servers:
  - hosts:
    - operations.ourdomain.nl
    #port:
    #  name: http
    #  number: 80
    #  protocol: HTTP
    #tls:
    #  httpsRedirect: true
  - hosts:
    - operations.ourdomain.nl
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: certs.ourdomain.nl
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds
This interesting article gives a good insight into how the acme-challenge is supposed to work. For testing purposes we have removed port 80 and the redirect to HTTPS from our custom gateway, and added the auto-generated k8s gateway, which listens only on port 80.
Istio is supposed to create a VirtualService for the acme-challenge. This seems to be happening, because when we now request the acme-challenge URL we get a 503: upstream connect error or disconnect/reset before headers. I believe this means the request reaches the gateway and is matched by a VirtualService, but there is no service / healthy pod to route the traffic to.
We do see some possibly interesting logging in the istio-pilot:
"ProxyStatus": {"endpoint_no_pod":
  {"cm-acme-http-solver-l5j2g.istio-system.svc.cluster.local":
    {"message": "10.16.57.248"}
I have double-checked, and the service mentioned above does have a pod that it is exposing. So I am not sure whether this line is relevant to this issue.
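For completeness, this is the kind of check used to verify that (the solver names are the auto-generated ones from above and change on every run):

kubectl -n istio-system get endpoints | grep cm-acme-http-solver
kubectl -n istio-system get pods -o wide | grep cm-acme-http-solver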
The acme-challenge pods do not have an Istio sidecar. Could this be the issue? If so: why does it apparently work for others?

Unable to use cert-manager and nginx ingress controller with SSL termination

I am trying out nginx-ingress on GKE with SSL termination for some use cases. I've been through countless blog posts on this process, which use cert-manager with the nginx ingress controller, but none of them worked in my case.
That certainly means I am doing something wrong, but I am not sure what. Here's what I did:
Create sample app exposed on ClusterIP
Deploy nginx-ingress
Create issuer
Create nginx ingress with issuer.
Result:
After describing the nginx ingress, the events area shows nothing. Everything is completely blank: not a single thing happened for requesting certs, HTTP validation, etc.
nginx ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/tls-acme: 'true'
spec:
  rules:
  - host: wptls.ml
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
  tls:
  - secretName: tls-staging-cert
    hosts:
    - wptls.ml
clusterissuer.yml:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: 'https://acme-staging-v02.api.letsencrypt.org/directory'
    email: xyz@gmail.com
    privateKeySecretRef:
      name: letsencrypt-sec-staging
    http01: {}
I am not sure if there's anything else which needs to be done.
Try adding extra Ingress annotations, like:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
See https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/ for the full list.
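For context, a sketch of where those annotations would go on the Ingress from the question (note: ssl-passthrough only takes effect if the controller was started with the --enable-ssl-passthrough flag, so treat that as something to verify):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
# spec unchanged from the question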