I've been trying for the last 3 days to set up cert-manager on a K8S cluster (v1.19.8) in an OpenStack environment with 1 master and 2 nodes.
It worked before (about a month ago), but since I re-created the cluster, the ACME challenge pods cannot be created due to this error:
Status:
Presented: false
Processing: true
Reason: pods "cm-acme-http-solver-" is forbidden: PodSecurityPolicy: unable to admit pod: []
State: pending
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Started 8m25s cert-manager Challenge scheduled for processing
Warning PresentError 3m18s (x7 over 8m23s) cert-manager Error presenting challenge: pods "cm-acme-http-solver-" is forbidden: PodSecurityPolicy: unable to admit pod: []
I've tried different versions of ingress-nginx, cert-manager, and k8s, but to no avail. I'm going crazy... please help. Many thanks :)
Cluster setup
kubectl create namespace ingress-nginx && \
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx && \
kubectl create namespace cert-manager && \
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--version v1.1.0 \
--set installCRDs=true
Issuer
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
    email: email@example.com
preferredChain: "ISRG Root X1"
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress:
class: nginx
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: main-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
cert-manager.io/issuer: "letsencrypt-prod"
spec:
tls:
- hosts:
- host.com
secretName: the-secret-name
rules:
- host: host.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: api-nginx
port:
number: 80
After some debugging and much help from the hosting provider, we found the problem and the solution.
We were using the latest (from master) version of Magnum/OpenStack, which got an update that installed a PodSecurityPolicy admission controller by default. That prevented cert-manager from creating the ACME solver pods.
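Before recreating anything, you can confirm that PodSecurityPolicy admission is what's blocking the solver pods; a quick check (a sketch, assuming the PSP API is still served and using a placeholder cluster name):
kubectl get podsecuritypolicies
openstack coe cluster show <cluster-name> -c labels -f value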
Recreating the cluster without a policy controller solved the issue:
openstack coe cluster create \
--cluster-template v1.kube1.20.4 \
--labels \
admission_control_list="NodeRestriction,NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,PersistentVolumeClaimResize,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,RuntimeClass" \
--merge-labels
...
A year late, but adding another solution in case it helps others who find this. I had the same issue of the challenge pod being blocked by PSP, but really didn't want to have to recreate/reconfigure my cluster, so I eventually solved the issue by adding this to the helm chart values.yaml:
https://github.com/cert-manager/cert-manager/blob/master/deploy/charts/cert-manager/values.yaml
global:
podSecurityPolicy:
enabled: true
useAppArmor: false
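If you install the chart directly rather than through a values file, the same settings can be passed on the command line; a sketch assuming the jetstack repo and the release/namespace names used earlier in this thread:
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --set global.podSecurityPolicy.enabled=true \
  --set global.podSecurityPolicy.useAppArmor=false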
In my case, this is part of a Gitlab deployment so I added it under the certmanager key, as follows:
certmanager:
install: true
global:
podSecurityPolicy:
enabled: true
useAppArmor: false
(tags for search: gitlab helm chart certmanager PodSecurityPolicy "unable to admit pod" blocked)
Related
I'm trying to set up a K3s cluster. When I had a single master and agent setup, cert-manager had no issues. Now I'm trying a 2-master setup with embedded etcd. I opened TCP ports 6443 and 2379-2380 for both VMs and did the following:
VM1: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --cluster-init
VM2: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --server https://MASTER_IP:6443
# k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
VM1 Ready control-plane,etcd,master 130m v1.22.7+k3s1
VM2 Ready control-plane,etcd,master 128m v1.22.7+k3s1
Installing cert-manager works fine:
# k3s kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
# k3s kubectl get pods --namespace cert-manager
NAME READY STATUS
cert-manager-b4d6fd99b-c6fpc 1/1 Running
cert-manager-cainjector-74bfccdfdf-gtmrd 1/1 Running
cert-manager-webhook-65b766b5f8-brb76 1/1 Running
My manifest has the following definition:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
    email: info@example.org
privateKeySecretRef:
name: letsencrypt-account-key
solvers:
- selector: {}
http01:
ingress: {}
Which results in the following error:
# k3s kubectl apply -f manifest.yaml
Error from server (InternalError): error when creating "manifest.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded
I tried disabling both firewalls, waiting a day, and resetting and re-setting everything up, but the error persists. Google hasn't been much help either. The little info I can find goes over my head for the most part, and no tutorial seems to do any extra steps.
Try to specify the proper ingress class name in your Cluster Issuer, like this:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
    email: info@example.org
privateKeySecretRef:
name: letsencrypt-account-key
solvers:
- http01:
ingress:
class: nginx
Also, make sure that you have the cert-manager annotation and the TLS secret name specified in your Ingress, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt
...
spec:
tls:
- hosts:
- domain.com
secretName: letsencrypt-account-key
A good starting point for troubleshooting issues with the webhook can be found in the docs, e.g. there is a section for problems on GKE private clusters.
In my case, however, this didn't really solve the problem. For me the issue was that, while playing around with cert-manager, I happened to install and uninstall it multiple times. It turned out that just removing the namespace, e.g. kubectl delete namespace cert-manager, didn't remove the webhooks and other non-obvious resources.
Following the official guide for uninstalling cert-manager and applying the manifests again solved the issue.
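In particular, cert-manager's webhook configurations are cluster-scoped, so they survive a namespace deletion. A sketch of how to spot and remove such leftovers (the exact resource names depend on how cert-manager was installed):
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep cert-manager
kubectl delete validatingwebhookconfiguration cert-manager-webhook
kubectl delete mutatingwebhookconfiguration cert-manager-webhook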
I did this, and it worked for me (moving the webhook off its default secure port 10250 avoids a port conflict on some setups):
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.8.0 \
  --set webhook.securePort=10260
source: https://hackmd.io/@maelvls/debug-cert-manager-webhook
I'm trying to add a self-signed certificate in my AKS cluster using Cert-Manager.
I created a ClusterIssuer for the CA certificate (to sign the certificate) and a second ClusterIssuer for the Certificate (self-signed) I want to use.
I am not sure if certificate2 is being used correctly by the Ingress, as it looks like it is waiting for some event.
Am I following the correct way to do this?
This is the first ClusterIssuer "clusterissuer.yml":
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: selfsigned
spec:
selfSigned: {}
This is the CA certificate "certificate.yml":
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: selfsigned-certificate
spec:
secretName: hello-deployment-tls-ca-key-pair
dnsNames:
- "*.default.svc.cluster.local"
- "*.default.com"
isCA: true
issuerRef:
name: selfsigned
kind: ClusterIssuer
This is the second ClusterIssuer "clusterissuer2.yml" for the certificate I want to use:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: hello-deployment-tls
spec:
ca:
secretName: hello-deployment-tls-ca-key-pair
and finally this is the self-signed certificate "certificate2.yml":
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
name: selfsigned-certificate2
spec:
secretName: hello-deployment-tls-ca-key-pair2
dnsNames:
- "*.default.svc.cluster.local"
- "*.default.com"
isCA: false
issuerRef:
name: hello-deployment-tls
kind: ClusterIssuer
I am using this certificate in an Ingress:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: "hello-deployment-tls"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
name: sonar-ingress
spec:
tls:
- secretName: "hello-deployment-tls-ca-key-pair2"
rules:
- http:
paths:
- pathType: Prefix
path: "/"
backend:
serviceName: sonarqube
servicePort: 80
As I do not have any registered domain name, I just want to use the public IP to access the service over https://<Public_IP>.
When I access the service at https://<Public_IP> I can see the "Kubernetes Ingress Controller Fake Certificate", so I guess this is because the certificate is not globally recognized by the browser.
The strange thing is here: theoretically the Ingress deployment is using selfsigned-certificate2, but it looks like it is not ready:
kubectl get certificate
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 4h29m
selfsigned-certificate2 False hello-deployment-tls-ca-key-pair2 3h3m
selfsigned-secret True selfsigned-secret 5h25m
kubectl describe certificate selfsigned-certificate2
.
.
.
Spec:
Dns Names:
*.default.svc.cluster.local
*.default.com
Issuer Ref:
Kind: ClusterIssuer
Name: hello-deployment-tls
Secret Name: hello-deployment-tls-ca-key-pair2
Status:
Conditions:
Last Transition Time: 2021-10-15T11:16:15Z
Message: Waiting for CertificateRequest "selfsigned-certificate2-3983093525" to complete
Reason: InProgress
Status: False
Type: Ready
Events: <none>
Any idea?
Thank you in advance.
ApiVersions
First I noticed you're using the v1alpha2 apiVersion, which is deprecated and will be removed in cert-manager 1.6:
$ kubectl apply -f cluster-alpha.yaml
Warning: cert-manager.io/v1alpha2 ClusterIssuer is deprecated in v1.4+, unavailable in v1.6+; use cert-manager.io/v1 ClusterIssuer
I used apiVersion: cert-manager.io/v1 in my reproduction.
The same applies to the v1beta1 Ingress; consider updating it to networking.k8s.io/v1.
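For reference, the Ingress from the question translated to networking.k8s.io/v1 would look roughly like this (a sketch; the service name and port are taken from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonar-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "hello-deployment-tls"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - secretName: "hello-deployment-tls-ca-key-pair2"
  rules:
  - http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: sonarqube
            port:
              number: 80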
What happens
I started reproducing your setup step by step.
I applied clusterissuer.yaml:
$ kubectl apply -f clusterissuer.yaml
clusterissuer.cert-manager.io/selfsigned created
$ kubectl get clusterissuer
NAME READY AGE
selfsigned True 11s
Pay attention that READY is set to True.
Next I applied certificate.yaml:
$ kubectl apply -f cert.yaml
certificate.cert-manager.io/selfsigned-certificate created
$ kubectl get cert
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 7s
The next step is to add the second ClusterIssuer, which references the hello-deployment-tls-ca-key-pair secret:
$ kubectl apply -f clusterissuer2.yaml
clusterissuer.cert-manager.io/hello-deployment-tls created
$ kubectl get clusterissuer
NAME READY AGE
hello-deployment-tls False 6s
selfsigned True 3m50
ClusterIssuer hello-deployment-tls is not ready. Here's why:
$ kubectl describe clusterissuer hello-deployment-tls
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrGetKeyPair 10s (x5 over 75s) cert-manager Error getting keypair for CA issuer: secret "hello-deployment-tls-ca-key-pair" not found
Warning ErrInitIssuer 10s (x5 over 75s) cert-manager Error initializing issuer: secret "hello-deployment-tls-ca-key-pair" not found
This is expected behaviour since:
When referencing a Secret resource in ClusterIssuer resources (eg
apiKeySecretRef) the Secret needs to be in the same namespace as the
cert-manager controller pod. You can optionally override this by using
the --cluster-resource-namespace argument to the controller.
Reference
Answer - how to move forward
I edited the cert-manager deployment so it will look for secrets in the default namespace (this is not ideal; I'd rather use an Issuer in the default namespace instead):
$ kubectl edit deploy cert-manager -n cert-manager
spec:
containers:
- args:
- --v=2
- --cluster-resource-namespace=default
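If cert-manager was installed with Helm, the same override can be set through the chart value instead of editing the deployment by hand (a sketch, assuming the jetstack chart and the cert-manager namespace):
helm upgrade cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --reuse-values \
  --set clusterResourceNamespace=default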
It takes about a minute for cert-manager to start. Then I redeployed clusterissuer2.yaml:
$ kubectl delete -f clusterissuer2.yaml
clusterissuer.cert-manager.io "hello-deployment-tls" deleted
$ kubectl apply -f clusterissuer2.yaml
clusterissuer.cert-manager.io/hello-deployment-tls created
$ kubectl get clusterissuer
NAME READY AGE
hello-deployment-tls True 3s
selfsigned True 5m42s
Both are READY. Moving forward with certificate2.yaml:
$ kubectl apply -f cert2.yaml
certificate.cert-manager.io/selfsigned-certificate2 created
$ kubectl get cert
NAME READY SECRET AGE
selfsigned-certificate True hello-deployment-tls-ca-key-pair 33s
selfsigned-certificate2 True hello-deployment-tls-ca-key-pair2 6s
$ kubectl get certificaterequest
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
selfsigned-certificate-jj98f True True selfsigned system:serviceaccount:cert-manager:cert-manager 52s
selfsigned-certificate2-jwq5c True True hello-deployment-tls system:serviceaccount:cert-manager:cert-manager 25s
Ingress
When the host is not added to the ingress, cert-manager doesn't create any certificates, and the ingress seems to serve a fake one issued by CN = Kubernetes Ingress Controller Fake Certificate.
Events from ingress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BadConfig 5s cert-manager TLS entry 0 is invalid: secret "example-cert" for ingress TLS has no hosts specified
When I added the DNS name to the ingress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateCertificate 4s cert-manager Successfully created Certificate "example-cert"
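For completeness, the relevant piece of the Ingress spec with the host added looks roughly like this (a sketch; example.com and example-cert are placeholders from my reproduction):
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-cert
  rules:
  - host: example.com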
Answer, part 2 (about ingress, certificates and issuer)
You don't need to create a Certificate if you're referencing the issuer in the ingress rule. cert-manager will issue the certificate for you when all the details are present, such as:
annotation cert-manager.io/cluster-issuer: "hello-deployment-tls"
spec.tls part with host within
spec.rules.host
OR
if you want to create the certificate manually and ask the ingress to use it, then:
remove the annotation cert-manager.io/cluster-issuer: "hello-deployment-tls"
create the certificate manually
refer to it in the ingress rule, as in the sketch below.
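A minimal sketch of that second variant, assuming the selfsigned-certificate2 from the question and a hypothetical host (the backend service and port are the ones from the question's ingress):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sonar-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # no cert-manager.io/cluster-issuer annotation here
spec:
  tls:
  - hosts:
    - sonar.example.com                              # hypothetical host
    secretName: hello-deployment-tls-ca-key-pair2    # secret created by selfsigned-certificate2
  rules:
  - host: sonar.example.com
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: sonarqube
            port:
              number: 80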
You can check the certificate details in the browser and find that the issuer is no longer CN = Kubernetes Ingress Controller Fake Certificate; in my case it's empty.
Note - cert-manager v1.4
Initially I used a slightly outdated cert-manager v1.4 and hit this issue, which went away after updating to 1.4.1.
It looks like:
$ kubectl describe certificaterequest selfsigned-certificate2-45k2c
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal cert-manager.io 41s cert-manager Certificate request has been approved by cert-manager.io
Warning DecodeError 41s cert-manager Failed to decode returned certificate: error decoding certificate PEM block
Useful links:
Setting up a self-signed Issuer
Setting up CA issuers
Cluster Issuers
There was a namespace "sandbox" on a node which was deleted, but there is still a challenge for a certificate "echo-tls".
But I can no longer access the sandbox namespace to delete this cert.
Could anyone help me delete this resource?
Here are the logs of cert-manager:
Found status change for Certificate "echo-tls" condition "Ready": "True" -> "False"; setting lastTransitionTime to...
cert-manager/controller/CertificateReadiness "msg"="re-queuing item due to error processing" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"echo-tls\": StorageError: invalid object, Code: 4, Key: /cert-manager.io/certificates/sandbox/echo-tls, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ..., UID in object meta: " "key"="sandbox/echo-tls"
After restarting the cert-manager pod, here are the logs:
cert-manager/controller/certificaterequests/handleOwnedResource "msg"="error getting referenced owning resource" "error"="certificaterequest.cert-manager.io \"echo-tls-bkmm8\" not found" "related_resource_kind"="CertificateRequest" "related_resource_name"="echo-tls-bkmm8" "related_resource_namespace"="sandbox" "resource_kind"="Order" "resource_name"="echo-tls-bkmm8-1177139468" "resource_namespace"="sandbox" "resource_version"="v1"
cert-manager/controller/orders "msg"="re-queuing item due to error processing" "error"="ACME client for issuer not initialised/available" "key"="sandbox/echo-tls-dwpt4-1177139468"
And then the same logs as before
The issuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: ***
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: letsencrypt-prod
solvers:
- http01:
ingress: {}
The configs for the deployment:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: <APP_NAME>
annotations:
kubernetes.io/tls-acme: "true"
kubernetes.io/ingress.class: nginx-<ENV>
acme.cert-manager.io/http01-ingress-class: nginx-<ENV>
cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
tls:
- hosts:
- ***.fr
secretName: <APP_NAME>-tls
rules:
- host: ***.fr
http:
paths:
- backend:
serviceName: <APP_NAME>
servicePort: 80
.k8s_config: &k8s_config
before_script:
- export HOME=/tmp
- export K8S_NAMESPACE="${APP_NAME}"
- kubectl config set-cluster k8s --server="${K8S_SERVER}"
- kubectl config set clusters.k8s.certificate-authority-data ${K8S_CA_DATA}
- kubectl config set-credentials default --token="${K8S_USER_TOKEN}"
- kubectl config set-context default --cluster=k8s --user=default --namespace=default
- kubectl config set-context ${K8S_NAMESPACE} --cluster=k8s --user=default --namespace=${K8S_NAMESPACE}
- kubectl config use-context default
- if [ -z `kubectl get namespace ${K8S_NAMESPACE} --no-headers --output=go-template={{.metadata.name}} 2>/dev/null` ]; then kubectl create namespace ${K8S_NAMESPACE}; fi
- if [ -z `kubectl --namespace=${K8S_NAMESPACE} get secret *** --no-headers --output=go-template={{.metadata.name}} 2>/dev/null` ]; then kubectl get secret *** --output yaml | sed "s/namespace:\ default/namespace:\ ${K8S_NAMESPACE}/" | kubectl create -f - ; fi
- kubectl config use-context ${K8S_NAMESPACE}
Usually certificates are stored inside Kubernetes secrets: https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets. You can retrieve secrets using kubectl get secrets --all-namespaces. You can also check which secrets are used by a given pod by checking its YAML description: kubectl get pods -n <pod-namespace> -o yaml (more information: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
A namespace is cluster-wide; it is not located on any node, so deleting a node does not delete any namespace.
If the above does not fit your needs, could you please provide some YAML files and command-line instructions which would allow reproducing the problem?
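If you want to check whether the challenge actually still exists in the API server (as opposed to being a stale cache entry in the controller), a sketch of the commands (assuming the cert-manager CRDs are still installed; names are taken from the logs above):
kubectl get namespace sandbox
kubectl get certificates,certificaterequests,orders,challenges --all-namespaces | grep echo-tls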
Finally, this Sunday cert-manager stopped the challenges on the old TLS certificate without any other action.
I am working with cert-manager in my Kubernetes cluster, in order to get certificates signed by the Let's Encrypt CA for my service application inside my cluster.
1. Create a cert-manager namespace
⟩ kubectl create namespace cert-manager
namespace/cert-manager created
2. I've created the CRDs that Helm needs to implement the CA and certificate functionality.
⟩ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created
3. Disable resource validation on the cert-manager namespace
⟩ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
namespace/cert-manager labeled
4. Add the Jetstack Helm repository and update the local cache
⟩ helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
⟩ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
5. I've installed cert-manager inside my k8s cluster using helm:
helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.7.0 \
jetstack/cert-manager
6. I've created an ACME Issuer including the HTTP-01 challenge provider, to obtain certificates by performing challenge validations against an ACME server such as Let's Encrypt.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: b.garcia@possibilit.nl
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
http01: {}
Applied in the same namespace (default) where the service application for which I want to get the certificates is located.
⟩ kubectl apply -f 01-lets-encrypt-issuer-staging.yaml
issuer.certmanager.k8s.io/letsencrypt-staging created
⟩ kubectl get issuer --namespace default
NAME AGE
letsencrypt-staging 22s
This has the following description. We can see that the ACME account was registered with the ACME server and the Status is True and Ready:
⟩ kubectl describe issuer letsencrypt-staging --namespace default
Name: letsencrypt-staging
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Issuer","metadata":{"annotations":{},"name":"letsencrypt-staging","namespace":"default...
API Version: certmanager.k8s.io/v1alpha1
Kind: Issuer
Metadata:
Creation Timestamp: 2019-03-13T10:12:01Z
Generation: 1
Resource Version: 247916
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/issuers/letsencrypt-staging
UID: 7170a66e-4578-11e9-b6d4-2aeecf80bb69
Spec:
Acme:
Email: b.garcia#myemail.com
Http 01:
Private Key Secret Ref:
Name: letsencrypt-staging
Server: https://acme-staging-v02.api.letsencrypt.org/directory
Status:
Acme:
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/8550675
Conditions:
Last Transition Time: 2019-03-13T10:12:02Z
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
Events: <none>
7. I've created the certificate in the same namespace where the Issuer was created (default), referencing it:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: zcrm365-lets-encrypt-staging
#namespace: default
spec:
secretName: zcrm365-lets-encrypt-staging-tls
issuerRef:
name: letsencrypt-staging
commonName: test1kongletsencrypt.possibilit.nl
# http01 challenge
acme:
config:
- http01:
ingressClass: nginx
# ingress: nginx # kong-ingress-controller # nginx
domains:
- test1kongletsencrypt.possibilit.nl
Apply the certificate
⟩ kubectl apply -f 02-certificate-staging.yaml
certificate.certmanager.k8s.io/zcrm365-lets-encrypt-staging created
I executed kubectl describe certificate zcrm365-lets-encrypt-staging and I can see the following:
⟩ kubectl describe certificate zcrm365-lets-encrypt-staging
Name: zcrm365-lets-encrypt-staging
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"zcrm365-lets-encrypt-staging","names...
API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata:
Creation Timestamp: 2019-03-13T19:32:25Z
Generation: 1
Resource Version: 321283
Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/zcrm365-lets-encrypt-staging
UID: bad7f778-45c6-11e9-b6d4-2aeecf80bb69
Spec:
Acme:
Config:
Domains:
test1kongletsencrypt.possibilit.nl
Http 01:
Ingress Class: nginx
Common Name: test1kongletsencrypt.possibilit.nl
Issuer Ref:
Name: letsencrypt-staging
Secret Name: zcrm365-lets-encrypt-staging-tls
Status:
Conditions:
Last Transition Time: 2019-03-13T19:32:25Z
Message: Certificate issuance in progress. Temporary certificate issued.
Reason: TemporaryCertificate
Status: False
Type: Ready
Events: <none>
We can see that the Status is False and the certificate issuance is temporary.
This certificate creates a secret named zcrm365-lets-encrypt-staging-tls which holds my key pair: tls.crt and tls.key.
⟩ kubectl describe secrets zcrm365-lets-encrypt-staging-tls
Name: zcrm365-lets-encrypt-staging-tls
Namespace: default
Labels: certmanager.k8s.io/certificate-name=zcrm365-lets-encrypt-staging
Annotations: certmanager.k8s.io/alt-names: test1kongletsencrypt.possibilit.nl
certmanager.k8s.io/common-name: test1kongletsencrypt.possibilit.nl
certmanager.k8s.io/ip-sans:
certmanager.k8s.io/issuer-kind: Issuer
certmanager.k8s.io/issuer-name: letsencrypt-staging
Type: kubernetes.io/tls
Data
====
ca.crt: 0 bytes
tls.crt: 1029 bytes
tls.key: 1679 bytes
8. Creating the ingress for my service application
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kong-ingress-zcrm365
namespace: default
annotations:
# kubernetes.io/ingress.class: "nginx"
certmanager.k8s.io/issuer: "letsencrypt-staging"
certmanager.k8s.io/acme-challenge-type: http01
# certmanager.k8s.io/acme-http01-ingress-class: "true"
# kubernetes.io/tls-acme: true
# this annotation requires additional configuration of the
# ingress-shim (see above). Namely, a default issuer must
# be specified as arguments to the ingress-shim container.
spec:
rules:
- host: test1kongletsencrypt.possibilit.nl
http:
paths:
- backend:
serviceName: zcrm365dev
servicePort: 80
path: /
tls:
- hosts:
- test1kongletsencrypt.possibilit.nl
secretName: zcrm365-lets-encrypt-staging-tls
Apply the ingress
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 created
I can see our ingresses:
⟩ kubectl get ingress -n default
NAME HOSTS ADDRESS PORTS AGE
cm-acme-http-solver-2m6gl test1kongletsencrypt.possibilit.nl 80 3h3m
kong-ingress-zcrm365 test1kongletsencrypt.possibilit.nl 52.166.60.158 80, 443 3h3m
The details of my ingresses are the following:
⟩ kubectl describe ingress cm-acme-http-solver-2m6gl
Name: cm-acme-http-solver-2m6gl
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
test1kongletsencrypt.possibilit.nl
/.well-known/acme-challenge/br0Y8eEsuZ5C2fKoeNVL2y03wn1ZHOQwKQCOOkyWabE cm-acme-http-solver-9cwhm:8089 (<none>)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
Events: <none>
---
⟩ kubectl describe ingress kong-ingress-zcrm365
Name: kong-ingress-zcrm365
Namespace: default
Address: 52.166.60.158
Default backend: default-http-backend:80 (<none>)
TLS:
zcrm365-lets-encrypt-staging-tls terminates test1kongletsencrypt.possibilit.nl
Rules:
Host Path Backends
---- ---- --------
test1kongletsencrypt.possibilit.nl
/ zcrm365dev:80 (<none>)
Annotations:
certmanager.k8s.io/acme-challenge-type: http01
certmanager.k8s.io/issuer: letsencrypt-staging
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"certmanager.k8s.io/acme-challenge-type":"http01","certmanager.k8s.io/issuer":"letsencrypt-staging"},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"zcrm365-lets-encrypt-staging-tls"}]}}
Events: <none>
When I perform all this, I can see that my application service is exposed via the kong-ingress-zcrm365 ingress, because it is reachable at my test1kongletsencrypt.possibilit.nl domain.
But as you can see, I don't get the HTTPS certificate for my service; the HTTPS connection is insecure.
I've checked the logs of my cert-manager pod and I have the following:
kubectl logs pod/cert-manager-6f68b58796-hlszm -n cert-manager
I0313 19:40:39.254765 1 controller.go:206] challenges controller: syncing item 'default/zcrm365-lets-encrypt-staging-298918015-0'
I0313 19:40:39.254869 1 logger.go:103] Calling Discover
I0313 19:40:39.257720 1 pod.go:89] Found pod "default/cm-acme-http-solver-s6s2n" with acme-order-url annotation set to that of Certificate "default/zcrm365-lets-encrypt-staging-298918015-0"but it is not owned by the Certificate resource, so skipping it.
I0313 19:40:39.257735 1 pod.go:64] No existing HTTP01 challenge solver pod found for Certificate "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.286823 1 service.go:51] No existing HTTP01 challenge solver service found for Certificate "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.347204 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=919604798
I0313 19:40:39.347437 1 ingress.go:98] No existing HTTP01 challenge solver ingress found for Challenge "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.362118 1 controller.go:178] ingress-shim controller: syncing item 'default/cm-acme-http-solver-2m6gl'
I0313 19:40:39.362257 1 sync.go:64] Not syncing ingress default/cm-acme-http-solver-2m6gl as it does not contain necessary annotations
I0313 19:40:39.362958 1 controller.go:184] ingress-shim controller: Finished processing work item "default/cm-acme-http-solver-2m6gl"
I0313 19:40:39.362702 1 pod.go:89] Found pod "default/cm-acme-http-solver-s6s2n" with acme-order-url annotation set to that of Certificate "default/zcrm365-lets-encrypt-staging-298918015-0"but it is not owned by the Certificate resource, so skipping it.
I0313 19:40:39.363270 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=919604798
I0313 19:40:46.279269 1 controller.go:206] challenges controller: syncing item 'default/zcrm365-lets-encrypt-staging-tls-1561329142-0'
E0313 19:40:46.279324 1 controller.go:230] ch 'default/zcrm365-lets-encrypt-staging-tls-1561329142-0' in work queue no longer exists
I0313 19:40:46.279332 1 controller.go:212] challenges controller: Finished processing work item "default/zcrm365-lets-encrypt-staging-tls-1561329142-0"
I think that the HTTP-01 challenge process is not being performed, because Let's Encrypt does not trust that I am the owner of the https://test1kongletsencrypt.possibilit.nl/index.html domain.
How can I solve this in order to get TLS with Let's Encrypt?
Is it possible that I need to use the ingress-shim functionality in my Helm cert-manager installation and/or webhook validation?
IMPORTANT UPDATE
I am currently using the kong-ingress-controller as the ingress for my deployment.
I've installed it the way described in this gist.
But I am not sure how to integrate my kong-ingress-controller with cert-manager when I am creating my zcrm365-lets-encrypt-staging certificate signing request.
This is my current view of my kong resources
⟩ kubectl get all -n kong
NAME READY STATUS RESTARTS AGE
pod/kong-7f66b99bb5-ldp4v 1/1 Running 0 2d16h
pod/kong-ingress-controller-667b4748d4-sptxm 1/2 Running 782 2d16h
pod/kong-migrations-h6qt2 0/1 Completed 0 2d16h
pod/konga-85b66cffff-6k6lt 1/1 Running 0 41h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kong-ingress-controller NodePort 10.0.48.131 <none> 8001:32257/TCP 2d16h
service/kong-proxy LoadBalancer 10.0.153.8 52.166.60.158 80:31577/TCP,443:32323/TCP 2d16h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/kong 1 1 1 1 2d16h
deployment.apps/kong-ingress-controller 1 1 1 0 2d16h
deployment.apps/konga 1 1 1 1 41h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kong-7f66b99bb5 1 1 1 2d16h
replicaset.apps/kong-ingress-controller-667b4748d4 1 1 0 2d16h
replicaset.apps/konga-85b66cffff 1 1 1 41h
NAME COMPLETIONS DURATION AGE
job.batch/kong-migrations 1/1 86s 2d16h
The service/kong-proxy service provides the external or public IP, and when I create kong-ingress-zcrm365, this ingress takes the external IP address provided by kong-proxy. But of course in the ingress I am indicating to use nginx and not the kong-ingress-controller.
And by the way, I don't have the nginx ingress controller installed; I am a little confused here.
If someone can point me in the right direction, their help will be highly appreciated.
First, check that if you are using an nginx ingress, the nginx ingress controller is actually running.
You are on the right track, but you have to add the ingress controller for your ingress: if you are using an nginx ingress, you have to add the nginx ingress controller to the K8s cluster.
Your approach with cert-manager and everything else is fine. Here is a link to a tutorial from DigitalOcean that follows the same approach you are following; just compare the steps:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
If there is any issue, drop a comment for more help.
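For example, a minimal way to add the nginx ingress controller with Helm (a sketch using the current ingress-nginx chart; at the time of cert-manager v0.7 the older stable/nginx-ingress chart was common instead):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace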
I would like to access my Kubernetes bare-metal cluster with an exposed Nginx Ingress Controller for TLS termination. To be able to automate certificate renewal, I would like to use the Kubernetes addon cert-manager, which is kube-lego's successor.
What I have done so far:
Set up a Kubernetes (v1.9.3) cluster on bare-metal (1 master, 1 minion, both running Ubuntu 16.04.4 LTS) with kubeadm and flannel as pod network following this guide.
Installed nginx-ingress (chart version 0.9.5) with Kubernetes package manager helm
helm install --name nginx-ingress --namespace kube-system stable/nginx-ingress --set controller.hostNetwork=true,rbac.create=true,controller.service.type=ClusterIP
Installed cert-manager (chart version 0.2.2) with helm
helm install --name cert-manager --namespace kube-system stable/cert-manager --set rbac.create=true
The Ingress Controller is exposed successfully and works as expected when I test with an Ingress resource. For proper Let's Encrypt certificate management and automatic renewal with cert-manager I do first of all need an Issuer resource. I created it from this acme-staging-issuer.yaml:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: letsencrypt-staging
namespace: default
spec:
acme:
server: https://acme-staging.api.letsencrypt.org/directory
    email: email@example.com
privateKeySecretRef:
name: letsencrypt-staging
http01: {}
kubectl create -f acme-staging-issuer.yaml runs successfully but kubectl describe issuer/letsencrypt-staging gives me:
Status:
Acme:
Uri:
Conditions:
Last Transition Time: 2018-03-05T21:29:41Z
Message: Failed to verify ACME account: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
Reason: ErrRegisterACMEAccount
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrVerifyACMEAccount 1s (x11 over 7s) cert-manager-controller Failed to verify ACME account: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
Warning ErrInitIssuer 1s (x11 over 7s) cert-manager-controller Error initializing issuer: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
Without a ready Issuer, I cannot proceed to generate cert-manager Certificates or utilise the ingress-shim (for automatic renewal).
What am I missing in my setup? Is it sufficient to expose the ingress controller using hostNetwork=true, or is there a better way to expose its ports 80 and 443 on a bare-metal cluster? How can I resolve the tls: oversized record received error when creating a cert-manager Issuer resource?
The tls: oversized record received error was caused by a misconfigured /etc/resolv.conf of the Kubernetes minion. It could be resolved by editing it like this:
$ sudo vi /etc/resolvconf/resolv.conf.d/base
Add nameserver list:
nameserver 8.8.8.8
nameserver 8.8.4.4
Update resolvconf:
$ sudo resolvconf -u
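To verify that name resolution works from inside the cluster afterwards, one quick check (a sketch using a throwaway busybox pod):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup acme-staging.api.letsencrypt.org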