Cert-Manager on Kubernetes: certificate renewal process

I have installed cert manager on a k8s cluster:
helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --version v1.5.3 --set installCRDs=true
My objective is to have mTLS communication between micro-services running in the same namespace.
For this purpose I have created a CA Issuer, i.e.:
kubectl get issuer -n sandbox -o yaml
apiVersion: v1
items:
- apiVersion: cert-manager.io/v1
  kind: Issuer
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"cert-manager.io/v1","kind":"Issuer","metadata":{"annotations":{},"name":"ca-issuer","namespace":"sandbox"},"spec":{"ca":{"secretName":"tls-internal-ca"}}}
    creationTimestamp: "2021-09-16T17:24:58Z"
    generation: 1
    managedFields:
    - apiVersion: cert-manager.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:spec:
          .: {}
          f:ca:
            .: {}
            f:secretName: {}
      manager: HashiCorp
      operation: Update
      time: "2021-09-16T17:24:58Z"
    - apiVersion: cert-manager.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:conditions: {}
      manager: controller
      operation: Update
      time: "2021-09-16T17:24:58Z"
    name: ca-issuer
    namespace: sandbox
    resourceVersion: "3895820"
    selfLink: /apis/cert-manager.io/v1/namespaces/sandbox/issuers/ca-issuer
    uid: 90f0c811-b78d-4346-bb57-68bf607ee468
  spec:
    ca:
      secretName: tls-internal-ca
  status:
    conditions:
    - message: Signing CA verified
      observedGeneration: 1
      reason: KeyPairVerified
      status: "True"
      type: Ready
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Using this CA issuer, I have created certificates for my two micro-services, i.e.:
kubectl get certificate -n sandbox
NAME                   READY   SECRET                 AGE
service1-certificate   True    service1-certificate   3d
service2-certificate   True    service2-certificate   2d23h
which are configured as:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  annotations:
    meta.helm.sh/release-name: service1
    meta.helm.sh/release-namespace: sandbox
  creationTimestamp: "2021-09-17T10:20:21Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: cert-manager.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
      f:spec:
        .: {}
        f:commonName: {}
        f:dnsNames: {}
        f:duration: {}
        f:issuerRef:
          .: {}
          f:kind: {}
          f:name: {}
        f:renewBefore: {}
        f:secretName: {}
        f:subject:
          .: {}
          f:organizations: {}
        f:usages: {}
    manager: Go-http-client
    operation: Update
    time: "2021-09-17T10:20:21Z"
  - apiVersion: cert-manager.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:privateKey: {}
      f:status:
        .: {}
        f:conditions: {}
        f:notAfter: {}
        f:notBefore: {}
        f:renewalTime: {}
        f:revision: {}
    manager: controller
    operation: Update
    time: "2021-09-20T05:14:12Z"
  name: service1-certificate
  namespace: sandbox
  resourceVersion: "5177051"
  selfLink: /apis/cert-manager.io/v1/namespaces/sandbox/certificates/service1-certificate
  uid: 0cf1ea65-92a1-4b03-944e-b847de2c80d9
spec:
  commonName: example.com
  dnsNames:
  - service1
  duration: 24h0m0s
  issuerRef:
    kind: Issuer
    name: ca-issuer
  renewBefore: 12h0m0s
  secretName: service1-certificate
  subject:
    organizations:
    - myorg
  usages:
  - client auth
  - server auth
status:
  conditions:
  - lastTransitionTime: "2021-09-20T05:14:13Z"
    message: Certificate is up to date and has not expired
    observedGeneration: 1
    reason: Ready
    status: "True"
    type: Ready
  notAfter: "2021-09-21T05:14:13Z"
  notBefore: "2021-09-20T05:14:13Z"
  renewalTime: "2021-09-20T17:14:13Z"
  revision: 5
Now, as you can see in the configuration, I have set them to renew after 12 hours. However, the secrets created via this Certificate resource still show an age of about three days (from when they were first created). I was expecting this TLS secret to be renewed by cert-manager every day, i.e.:
kubectl get secrets service1-certificate service2-certificate -n sandbox -o wide
NAME                   TYPE                DATA   AGE
service1-certificate   kubernetes.io/tls   3      2d23h
service2-certificate   kubernetes.io/tls   3      3d1h
Is there something wrong in my understanding? In the cert-manager pod logs I do see some errors around renewing, i.e.:
I0920 05:14:04.649158 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="sandbox/service1-certificate" "message"="Renewing certificate as renewal was scheduled at 2021-09-19 08:24:13 +0000 UTC" "reason"="Renewing"
I0920 05:14:04.649235 1 conditions.go:201] Setting lastTransitionTime for Certificate "service1-certificate" condition "Issuing" to 2021-09-20 05:14:04.649227766 +0000 UTC m=+87949.327215532
I0920 05:14:04.652174 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="sandbox/service2-certificate" "message"="Renewing certificate as renewal was scheduled at 2021-09-19 10:20:22 +0000 UTC" "reason"="Renewing"
I0920 05:14:04.652231 1 conditions.go:201] Setting lastTransitionTime for Certificate "service2-certificate" condition "Issuing" to 2021-09-20 05:14:04.652224302 +0000 UTC m=+87949.330212052
I0920 05:14:04.671111 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:04.671094596 +0000 UTC m=+87949.349082328
I0920 05:14:04.671344 1 conditions.go:190] Found status change for Certificate "service1-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:04.671332206 +0000 UTC m=+87949.349319948
I0920 05:14:12.703039 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service2-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:12.703896 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:12.7038803 +0000 UTC m=+87957.381868045
I0920 05:14:12.749502 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service1-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service1-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:12.750096 1 conditions.go:190] Found status change for Certificate "service1-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:12.750082572 +0000 UTC m=+87957.428070303
I0920 05:14:13.009032 1 controller.go:161] cert-manager/controller/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service1-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service1-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.117843 1 controller.go:161] cert-manager/controller/certificates-readiness "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service2-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.119366 1 conditions.go:190] Found status change for Certificate "service2-certificate" condition "Ready": "True" -> "False"; setting lastTransitionTime to 2021-09-20 05:14:13.119351795 +0000 UTC m=+87957.797339520
I0920 05:14:13.122820 1 controller.go:161] cert-manager/controller/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="sandbox/service2-certificate" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"service2-certificate\": the object has been modified; please apply your changes to the latest version and try again"
I0920 05:14:13.123907 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service2-certificate-t92qh" condition "Approved" to 2021-09-20 05:14:13.123896104 +0000 UTC m=+87957.801883833
I0920 05:14:13.248082 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service1-certificate-p9stz" condition "Approved" to 2021-09-20 05:14:13.248071551 +0000 UTC m=+87957.926059296
I0920 05:14:13.253488 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service2-certificate-t92qh" condition "Ready" to 2021-09-20 05:14:13.253474153 +0000 UTC m=+87957.931461871
I0920 05:14:13.388001 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "service1-certificate-p9stz" condition "Ready" to 2021-09-20 05:14:13.387983783 +0000 UTC m=+87958.065971525


Short answer
Based on the logs and the Certificate details you provided, it is safe to say it is working as expected.
Pay attention to revision: 5 in your Certificate status: it means the certificate has already been renewed four times. If you look again now, it will be 6 or 7, because the certificate is renewed every 12 hours.
Logs
The first thing that can be really confusing is the error messages in the cert-manager pod. These are mostly noisy messages which are not really helpful by themselves.
See this GitHub issue comment and GitHub issue #3667.
In case detailed logs are really needed, the verbosity level should be increased by setting the --v=5 argument in the cert-manager deployment. To edit the deployment, run the following command:
kubectl edit deploy cert-manager -n cert-manager
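If you prefer a non-interactive change, the same flag can be appended with a JSON patch; this is a sketch, and the container index 0 is an assumption that holds for the default Helm chart layout:

```shell
# Append --v=5 to the controller container's args (container index 0 assumed
# from the default chart); the deployment rolls the pod automatically.
kubectl -n cert-manager patch deployment cert-manager --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--v=5"}]'
```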
How to check the certificate/secret
When a certificate is renewed, the age of the Secret and of the Certificate does not change, but their content is edited; for instance, resourceVersion in the Secret and revision in the Certificate are updated.
Below are options to check whether the certificate was renewed:
Get the secret in YAML before and after the renewal:
kubectl get secret example-certificate -o yaml > secret-before
Then run a diff between the two dumps. You will see that tls.crt as well as resourceVersion has been updated.
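Spelled out, with the secret name and namespace from the question (the wait in the middle is until the Certificate's status.renewalTime has passed):

```shell
# Dump the Secret before renewal...
kubectl get secret service1-certificate -n sandbox -o yaml > secret-before
# ...wait until status.renewalTime of the Certificate has passed, then dump again:
kubectl get secret service1-certificate -n sandbox -o yaml > secret-after
# tls.crt, tls.key and resourceVersion will differ; metadata.name and AGE will not:
diff secret-before secret-after
```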
Look at the certificate's revision and the dates in its status
(I set duration to the minimum possible 1h and renewBefore to 55m, so it is renewed every 5 minutes):
$ kubectl get cert example-cert -o yaml
notAfter: "2021-09-21T14:05:24Z"
notBefore: "2021-09-21T13:05:24Z"
renewalTime: "2021-09-21T13:10:24Z"
revision: 7
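The renewalTime shown above is simply notAfter minus renewBefore, which you can sanity-check with GNU date (values copied from the status output above):

```shell
# cert-manager schedules renewal at notAfter - renewBefore
# (here: 1h duration, 55m renewBefore, so renewal 5 minutes after issuance).
not_after="2021-09-21T14:05:24Z"
date -u -d "$not_after - 55 minutes" +"%Y-%m-%dT%H:%M:%SZ"   # → 2021-09-21T13:10:24Z
```

(BSD/macOS date uses different flags; this assumes GNU coreutils.)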
Check events in the namespace where the certificate/secret are deployed:
$ kubectl get events
117s    Normal   Issuing     certificate/example-cert   The certificate has been successfully issued
117s    Normal   Reused      certificate/example-cert   Reusing private key stored in existing Secret resource "example-staging-certificate"
6m57s   Normal   Issuing     certificate/example-cert   Renewing certificate as renewal was scheduled at 2021-09-21 13:00:24 +0000 UTC
6m57s   Normal   Requested   certificate/example-cert   Created new CertificateRequest resource "example-cert-bs8g6"
117s    Normal   Issuing     certificate/example-cert   Renewing certificate as renewal was scheduled at 2021-09-21 13:05:24 +0000 UTC
117s    Normal   Requested   certificate/example-cert   Created new CertificateRequest resource "example-cert-7x8cf"
Look at the certificaterequests:
$ kubectl get certificaterequests
NAME                 APPROVED   DENIED   READY   ISSUER      REQUESTOR                                         AGE
example-cert-2pxdd   True                True    ca-issuer   system:serviceaccount:cert-manager:cert-manager   14m
example-cert-54zzc   True                True    ca-issuer   system:serviceaccount:cert-manager:cert-manager   4m29s
example-cert-8vjcm   True                True    ca-issuer   system:serviceaccount:cert-manager:cert-manager   9m29s
Check the logs in the cert-manager pod to see the four stages:
I0921 12:45:24.000726 1 trigger_controller.go:181] cert-manager/controller/certificates-trigger "msg"="Certificate must be re-issued" "key"="default/example-cert" "message"="Renewing certificate as renewal was scheduled at 2021-09-21 12:45:24 +0000 UTC" "reason"="Renewing"
I0921 12:45:24.000761 1 conditions.go:201] Setting lastTransitionTime for Certificate "example-cert" condition "Issuing" to 2021-09-21 12:45:24.000756621 +0000 UTC m=+72341.194879378
I0921 12:45:24.120503 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "example-cert-mxvbm" condition "Approved" to 2021-09-21 12:45:24.12049391 +0000 UTC m=+72341.314616684
I0921 12:45:24.154092 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "example-cert-mxvbm" condition "Ready" to 2021-09-21 12:45:24.154081971 +0000 UTC m=+72341.348204734
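As a local illustration (plain openssl, no cluster needed) of why the Secret's AGE never changes on renewal: re-signing the same CSR produces a certificate with a new serial, written over the same file name. All the names below are made up to mirror the question's setup:

```shell
tmp=$(mktemp -d)

# CA key pair, playing the role of the tls-internal-ca secret:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=tls-internal-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 365 2>/dev/null

# Service key + CSR, as cert-manager prepares on each issuance:
openssl req -newkey rsa:2048 -nodes -subj "/O=myorg/CN=example.com" \
  -keyout "$tmp/tls.key" -out "$tmp/service1.csr" 2>/dev/null

issue() {  # sign the CSR, overwriting the same tls.crt, and print its serial
  openssl x509 -req -in "$tmp/service1.csr" -CA "$tmp/ca.crt" -CAkey "$tmp/ca.key" \
    -CAcreateserial -days 1 -out "$tmp/tls.crt" 2>/dev/null
  openssl x509 -noout -serial -in "$tmp/tls.crt"
}

first=$(issue)
second=$(issue)  # the "renewal": same file name, different certificate
[ "$first" != "$second" ] && echo "content changed, file name did not"
```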
Note
Very importantly, not all issuers support the duration and renewBefore fields. E.g. Let's Encrypt still does not honor them and uses its default of 90 days. Reference.

Related

Failing to issue TLS certificate with certificate manager in kubernetes & CloudFlare

We are trying to move our entire app infrastructure to Kubernetes and the last thing that is left to do is to configure TLS.
We are using Kubernetes on DigitalOcean and our DNS is on Cloudflare.
For simplicity, we decided to go with a wildcard certificate and followed these docs to accomplish it.
Here is what we have until now.
Issuer
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cloudflare-ambassador-wcard
  # namespace: ambassador
spec:
  # ACME issuer configuration:
  # `email` - the email address to be associated with the ACME account (make sure it's a valid one).
  # `server` - the URL used to access the ACME server's directory endpoint.
  # `privateKeySecretRef` - Kubernetes Secret to store the automatically generated ACME account private key.
  acme:
    email: alex#priz.guru
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: cloudflare-ambassador-wcard-private
    # List of challenge solvers that will be used to solve ACME challenges for the matching domains.
    solvers:
    - dns01:
        cloudflare:
          email: <my email - same as in Cloudflare>
          apiKeySecretRef:
            name: cloudflare-api-token-secret
            key: api-token
      selector:
        dnsNames:
        - 'app.priz.guru'
        - 'appp.priz.guru'
        - 'api.priz.guru'
Certificate
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: priz-guru-cert
  # Cert-Manager will put the resulting Secret in the same Kubernetes namespace as the Certificate.
  namespace: ambassador
spec:
  # Secret name to create, where the private key and certificate should be stored.
  secretName: priz-guru-cert
  # What Issuer to use for getting the certificate.
  issuerRef:
    name: cloudflare-ambassador-wcard
    kind: ClusterIssuer
    group: cert-manager.io
  # Common name to be used on the Certificate.
  commonName: "*.priz.guru"
  # List of DNS subjectAltNames to be set on the Certificate.
  dnsNames:
  - 'appp.priz.guru'
  - 'api.priz.guru'
Cloudflare API token is configured in secrets.
When I try getting the certificate info, I see that it is not ready (for a very long time):
$ kubectl get certificate priz-guru-cert -n ambassador
NAME             READY   SECRET           AGE
priz-guru-cert   False   priz-guru-cert   12h
and looking at the logs, it seems like the certificate was approved, but it was not issued because there was another attempt within the last hour.
$ kubectl logs -l app=cert-manager,app.kubernetes.io/component=controller -n cert-manager
I0212 20:56:39.469298 1 trigger_controller.go:160] cert-manager/certificates-trigger "msg"="Not re-issuing certificate as an attempt has been made in the last hour" "key"="ambassador/priz-guru-cert" "retry_delay"=3599530749932
I0212 21:56:39.000546 1 trigger_controller.go:181] cert-manager/certificates-trigger "msg"="Certificate must be re-issued" "key"="ambassador/priz-guru-cert" "message"="Issuing certificate as Secret does not exist" "reason"="DoesNotExist"
I0212 21:56:39.000597 1 conditions.go:190] Found status change for Certificate "priz-guru-cert" condition "Issuing": "False" -> "True"; setting lastTransitionTime to 2022-02-12 21:56:39.000591396 +0000 UTC m=+131965.533360318
I0212 21:56:39.569514 1 issuing_controller.go:265] cert-manager/certificates-issuing "msg"="Found a failed CertificateRequest from previous issuance, waiting for it to be deleted..." "key"="ambassador/priz-guru-cert" "resource_kind"="CertificateRequest" "resource_name"="priz-guru-cert-z2cnw" "resource_namespace"="ambassador" "resource_version"="v1"
I0212 21:56:39.578311 1 controller.go:161] cert-manager/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="ambassador/priz-guru-cert" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"priz-guru-cert\": the object has been modified; please apply your changes to the latest version and try again"
I0212 21:56:39.601182 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "priz-guru-cert-9blj7" condition "Approved" to 2022-02-12 21:56:39.601171113 +0000 UTC m=+131966.133940054
I0212 21:56:39.618109 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "priz-guru-cert-9blj7" condition "Ready" to 2022-02-12 21:56:39.61810027 +0000 UTC m=+131966.150869178
I0212 21:56:39.635005 1 conditions.go:190] Found status change for Certificate "priz-guru-cert" condition "Issuing": "True" -> "False"; setting lastTransitionTime to 2022-02-12 21:56:39.63499362 +0000 UTC m=+131966.167762571
I0212 21:56:39.647445 1 trigger_controller.go:160] cert-manager/certificates-trigger "msg"="Not re-issuing certificate as an attempt has been made in the last hour" "key"="ambassador/priz-guru-cert" "retry_delay"=3599352583741
I0212 21:56:39.668068 1 trigger_controller.go:160] cert-manager/certificates-trigger "msg"="Not re-issuing certificate as an attempt has been made in the last hour" "key"="ambassador/priz-guru-cert" "retry_delay"=3599331976662
Tried deleting everything and reapplying (after more than an hour). Same result.
How can I see what is the issue here? Are we even configuring it correctly?
UPDATE
Here is the description of the same certificate.
$ kubectl describe certificate priz-guru-cert -n ambassador
Name:         priz-guru-cert
Namespace:    ambassador
Labels:       kustomize.toolkit.fluxcd.io/name=flux-system
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  kustomize.toolkit.fluxcd.io/checksum: 8a875fb65b8d2a0d1ca76e552d21dca509e81ab7
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2022-02-12T09:56:21Z
  Generation:          2
  Managed Fields:
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
          f:kustomize.toolkit.fluxcd.io/checksum:
        f:labels:
          .:
          f:kustomize.toolkit.fluxcd.io/name:
          f:kustomize.toolkit.fluxcd.io/namespace:
      f:spec:
        .:
        f:commonName:
        f:dnsNames:
        f:issuerRef:
          .:
          f:group:
          f:kind:
          f:name:
        f:secretName:
    Manager:      kustomize-controller
    Operation:    Update
    Time:         2022-02-12T09:56:21Z
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:lastFailureTime:
    Manager:         controller
    Operation:       Update
    Time:            2022-02-12T22:56:39Z
  Resource Version:  7912521
  UID:               deedc903-dc40-4e32-a4e8-91765cb33347
Spec:
  Common Name:  *.priz.guru
  Dns Names:
    appp.priz.guru
    api.priz.guru
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       ClusterIssuer
    Name:       cloudflare-ambassador-wcard
  Secret Name:  priz-guru-cert
Status:
  Conditions:
    Last Transition Time:  2022-02-12T09:56:21Z
    Message:               Issuing certificate as Secret does not exist
    Observed Generation:   2
    Reason:                DoesNotExist
    Status:                False
    Type:                  Ready
    Last Transition Time:  2022-02-12T22:56:39Z
    Message:               The certificate request has failed to complete and will be retried: The CSR PEM requests a commonName that is not present in the list of dnsNames or ipAddresses. If a commonName is set, ACME requires that the value is also present in the list of dnsNames or ipAddresses: "*.priz.guru" does not exist in [appp.priz.guru api.priz.guru] or []
    Observed Generation:   2
    Reason:                Failed
    Status:                False
    Type:                  Issuing
  Last Failure Time:       2022-02-12T22:56:39Z
Events:
  Type     Reason     Age                 From          Message
  ----     ------     ----                ----          -------
  Normal   Issuing    46m (x15 over 13h)  cert-manager  Issuing certificate as Secret does not exist
  Warning  Failed     46m (x14 over 13h)  cert-manager  The certificate request has failed to complete and will be retried: The CSR PEM requests a commonName that is not present in the list of dnsNames or ipAddresses. If a commonName is set, ACME requires that the value is also present in the list of dnsNames or ipAddresses: "*.priz.guru" does not exist in [appp.priz.guru api.priz.guru] or []
  Normal   Generated  46m                 cert-manager  Stored new private key in temporary Secret resource "priz-guru-cert-dmp62"
  Normal   Requested  46m                 cert-manager  Created new CertificateRequest resource "priz-guru-cert-gmhfs"
UPDATE - Trying to address "The CSR PEM requests a commonName that is not present in the list of dnsNames or ipAddresses"
Now the certificate config is:
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: priz-guru-cert
  # Cert-Manager will put the resulting Secret in the same Kubernetes namespace as the Certificate.
  namespace: ambassador
spec:
  # Secret name to create, where the private key and certificate should be stored.
  secretName: priz-guru-cert
  # What Issuer to use for getting the certificate.
  issuerRef:
    name: cloudflare-ambassador-wcard
    kind: ClusterIssuer
    group: cert-manager.io
  # Common name to be used on the Certificate.
  # commonName: "*.priz.guru"
  # List of DNS subjectAltNames to be set on the Certificate.
  dnsNames:
  - '*.priz.guru'
Still not issuing with the following:
$ kubectl describe certificate priz-guru-cert -n ambassador
Name:         priz-guru-cert
Namespace:    ambassador
Labels:       kustomize.toolkit.fluxcd.io/name=flux-system
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  kustomize.toolkit.fluxcd.io/checksum: 5208aadd2a6d21e1d6f2f2dfc3f8d29a63990962
API Version:  cert-manager.io/v1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2022-02-12T09:56:21Z
  Generation:          7
  Managed Fields:
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
          f:kustomize.toolkit.fluxcd.io/checksum:
        f:labels:
          .:
          f:kustomize.toolkit.fluxcd.io/name:
          f:kustomize.toolkit.fluxcd.io/namespace:
      f:spec:
        .:
        f:dnsNames:
        f:issuerRef:
          .:
          f:group:
          f:kind:
          f:name:
        f:secretName:
    Manager:      kustomize-controller
    Operation:    Update
    Time:         2022-02-12T23:58:13Z
    API Version:  cert-manager.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:conditions:
        f:lastFailureTime:
        f:nextPrivateKeySecretName:
    Manager:         controller
    Operation:       Update
    Time:            2022-02-12T23:59:15Z
  Resource Version:  7933741
  UID:               deedc903-dc40-4e32-a4e8-91765cb33347
Spec:
  Dns Names:
    *.priz.guru
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       ClusterIssuer
    Name:       cloudflare-ambassador-wcard
  Secret Name:  priz-guru-cert
Status:
  Conditions:
    Last Transition Time:  2022-02-12T09:56:21Z
    Message:               Issuing certificate as Secret does not exist
    Observed Generation:   7
    Reason:                DoesNotExist
    Status:                False
    Type:                  Ready
    Last Transition Time:  2022-02-12T23:59:14Z
    Message:               Issuing certificate as Secret does not exist
    Observed Generation:   6
    Reason:                DoesNotExist
    Status:                True
    Type:                  Issuing
  Last Failure Time:             2022-02-12T23:58:14Z
  Next Private Key Secret Name:  priz-guru-cert-kkjxx
Events:
  Type     Reason     Age                 From          Message
  ----     ------     ----                ----          -------
  Normal   Generated  34m                 cert-manager  Stored new private key in temporary Secret resource "priz-guru-cert-xx4kz"
  Normal   Requested  34m                 cert-manager  Created new CertificateRequest resource "priz-guru-cert-lgcwt"
  Warning  Failed     34m                 cert-manager  The certificate request has failed to complete and will be retried: Failed to wait for order resource "priz-guru-cert-lgcwt-3857888143" to become ready: order is in "errored" state: Failed to create Order: 400 urn:ietf:params:acme:error:malformed: Error creating new order :: Domain name "api.priz.guru" is redundant with a wildcard domain in the same request. Remove one or the other from the certificate request.
  Normal   Generated  30m                 cert-manager  Stored new private key in temporary Secret resource "priz-guru-cert-q89c8"
  Normal   Requested  30m                 cert-manager  Created new CertificateRequest resource "priz-guru-cert-wgqwf"
  Normal   Requested  22m                 cert-manager  Created new CertificateRequest resource "priz-guru-cert-v4dj4"
  Warning  Failed     22m                 cert-manager  The certificate request has failed to complete and will be retried: Failed to wait for order resource "priz-guru-cert-v4dj4-887490919" to become ready: order is in "errored" state: Failed to create Order: 400 urn:ietf:params:acme:error:malformed: Error creating new order :: Domain name "api.priz.guru" is redundant with a wildcard domain in the same request. Remove one or the other from the certificate request.
  Normal   Issuing    21m (x18 over 14h)  cert-manager  Issuing certificate as Secret does not exist
  Normal   Generated  21m                 cert-manager  Stored new private key in temporary Secret resource "priz-guru-cert-kkjxx"
  Normal   Requested  21m                 cert-manager  Created new CertificateRequest resource "priz-guru-cert-9w7pr"
  Normal   Requested  3m41s               cert-manager  Created new CertificateRequest resource "priz-guru-cert-2ntk9"

Issue with Self-signed certificate with Cert-Manager in Kubernetes

I'm trying to add a self-signed certificate in my AKS cluster using Cert-Manager.
I created a ClusterIssuer for the CA certificate (to sign the certificate) and a second ClusterIssuer for the Certificate (self-signed) I want to use.
I am not sure if the certificate2 is being used correctly by Ingress as it looks like it is waiting for some event.
Am I following the correct way to do this?
This is the first ClusterIssuer "clusterissuer.yml":
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
This is the CA certificate "certificate.yml":
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: selfsigned-certificate
spec:
  secretName: hello-deployment-tls-ca-key-pair
  dnsNames:
  - "*.default.svc.cluster.local"
  - "*.default.com"
  isCA: true
  issuerRef:
    name: selfsigned
    kind: ClusterIssuer
This is the second ClusterIssuer "clusterissuer2.yml" for the certificate I want to use:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: hello-deployment-tls
spec:
  ca:
    secretName: hello-deployment-tls-ca-key-pair
and finally this is the self-signed certificate "certificate2.yml":
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: selfsigned-certificate2
spec:
  secretName: hello-deployment-tls-ca-key-pair2
  dnsNames:
  - "*.default.svc.cluster.local"
  - "*.default.com"
  isCA: false
  issuerRef:
    name: hello-deployment-tls
    kind: ClusterIssuer
I am using this certificate in an Ingress:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "hello-deployment-tls"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: sonar-ingress
spec:
  tls:
  - secretName: "hello-deployment-tls-ca-key-pair2"
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          serviceName: sonarqube
          servicePort: 80
As I do not have any registered domain name, I just want to use the public IP to access the service over https://<Public_IP>.
When I access the service at https://<Public_IP>, I see the "Kubernetes Ingress Controller Fake Certificate", so I guess this is because the certificate is not globally recognized by the browser.
The strange thing is here: theoretically the Ingress deployment is using selfsigned-certificate2, but it looks like it is not ready:
kubectl get certificate
NAME                      READY   SECRET                              AGE
selfsigned-certificate    True    hello-deployment-tls-ca-key-pair    4h29m
selfsigned-certificate2   False   hello-deployment-tls-ca-key-pair2   3h3m
selfsigned-secret         True    selfsigned-secret                   5h25m
kubectl describe certificate selfsigned-certificate2
.
.
.
Spec:
  Dns Names:
    *.default.svc.cluster.local
    *.default.com
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       hello-deployment-tls
  Secret Name:  hello-deployment-tls-ca-key-pair2
Status:
  Conditions:
    Last Transition Time:  2021-10-15T11:16:15Z
    Message:               Waiting for CertificateRequest "selfsigned-certificate2-3983093525" to complete
    Reason:                InProgress
    Status:                False
    Type:                  Ready
Events:  <none>
Any idea?
Thank you in advance.
ApiVersions
First, I noticed you're using the v1alpha2 apiVersion, which is deprecated and will be removed in cert-manager 1.6:
$ kubectl apply -f cluster-alpha.yaml
Warning: cert-manager.io/v1alpha2 ClusterIssuer is deprecated in v1.4+, unavailable in v1.6+; use cert-manager.io/v1 ClusterIssuer
I used apiVersion: cert-manager.io/v1 in the reproduction.
The same goes for the v1beta1 ingress; consider updating it to networking.k8s.io/v1.
What happens
I started reproducing your setup step by step.
I applied clusterissuer.yaml:
$ kubectl apply -f clusterissuer.yaml
clusterissuer.cert-manager.io/selfsigned created
$ kubectl get clusterissuer
NAME         READY   AGE
selfsigned   True    11s
Pay attention that READY is set to True.
Next I applied certificate.yaml:
$ kubectl apply -f cert.yaml
certificate.cert-manager.io/selfsigned-certificate created
$ kubectl get cert
NAME                     READY   SECRET                             AGE
selfsigned-certificate   True    hello-deployment-tls-ca-key-pair   7s
Next step is to add the second ClusterIssuer which is referenced to hello-deployment-tls-ca-key-pair secret:
$ kubectl apply -f clusterissuer2.yaml
clusterissuer.cert-manager.io/hello-deployment-tls created
$ kubectl get clusterissuer
NAME                   READY   AGE
hello-deployment-tls   False   6s
selfsigned             True    3m50s
ClusterIssuer hello-deployment-tls is not ready. Here's why:
$ kubectl describe clusterissuer hello-deployment-tls
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrGetKeyPair 10s (x5 over 75s) cert-manager Error getting keypair for CA issuer: secret "hello-deployment-tls-ca-key-pair" not found
Warning ErrInitIssuer 10s (x5 over 75s) cert-manager Error initializing issuer: secret "hello-deployment-tls-ca-key-pair" not found
This is expected behaviour since:
When referencing a Secret resource in ClusterIssuer resources (eg
apiKeySecretRef) the Secret needs to be in the same namespace as the
cert-manager controller pod. You can optionally override this by using
the --cluster-resource-namespace argument to the controller.
Reference
Answer - how to move forward
I edited the cert-manager deployment so that it looks for secrets in the default namespace (this is not ideal; I'd use an Issuer in the default namespace instead):
$ kubectl edit deploy cert-manager -n cert-manager
spec:
  containers:
  - args:
    - --v=2
    - --cluster-resource-namespace=default
It takes about a minute for cert-manager to start. Redeployed clusterissuer2.yaml:
$ kubectl delete -f clusterissuer2.yaml
clusterissuer.cert-manager.io "hello-deployment-tls" deleted
$ kubectl apply -f clusterissuer2.yaml
clusterissuer.cert-manager.io/hello-deployment-tls created
$ kubectl get clusterissuer
NAME                   READY   AGE
hello-deployment-tls   True    3s
selfsigned             True    5m42s
Both are READY. Moving forward with certificate2.yaml:
$ kubectl apply -f cert2.yaml
certificate.cert-manager.io/selfsigned-certificate2 created
$ kubectl get cert
NAME                      READY   SECRET                              AGE
selfsigned-certificate    True    hello-deployment-tls-ca-key-pair    33s
selfsigned-certificate2   True    hello-deployment-tls-ca-key-pair2   6s
$ kubectl get certificaterequest
NAME                            APPROVED   DENIED   READY   ISSUER                 REQUESTOR                                         AGE
selfsigned-certificate-jj98f    True                True    selfsigned             system:serviceaccount:cert-manager:cert-manager   52s
selfsigned-certificate2-jwq5c   True                True    hello-deployment-tls   system:serviceaccount:cert-manager:cert-manager   25s
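To double-check that the second certificate really chains to the first CA, the two secrets can be compared with openssl. This is a sketch; it assumes the secrets live in the default namespace, as in this reproduction:

```shell
# Extract the CA cert from the first secret and the leaf cert from the second:
kubectl get secret hello-deployment-tls-ca-key-pair -o jsonpath='{.data.ca\.crt}' \
  | base64 -d > ca.crt
kubectl get secret hello-deployment-tls-ca-key-pair2 -o jsonpath='{.data.tls\.crt}' \
  | base64 -d > tls.crt
# Should report "tls.crt: OK" if the chain is intact:
openssl verify -CAfile ca.crt tls.crt
```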
Ingress
When a host is not added to the ingress, it doesn't create any certificates, and the ingress controller seems to use a fake one, issued by CN = Kubernetes Ingress Controller Fake Certificate.
Events from ingress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BadConfig 5s cert-manager TLS entry 0 is invalid: secret "example-cert" for ingress TLS has no hosts specified
When I added DNS to ingress:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CreateCertificate 4s cert-manager Successfully created Certificate "example-cert"
Answer, part 2 (about ingress, certificates and issuer)
You don't need to create a Certificate if you reference the issuer in the ingress rule. The ingress will get a certificate issued for you when all the details are present, namely:
the annotation cert-manager.io/cluster-issuer: "hello-deployment-tls"
the spec.tls section with the host inside
spec.rules.host
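As a sketch, an ingress that satisfies all three conditions could look like this (host, service name, and secret name are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    cert-manager.io/cluster-issuer: "hello-deployment-tls"
spec:
  tls:
  - hosts:
    - hello.example.com        # host must be present, or you get the
    secretName: hello-tls      # "has no hosts specified" warning above
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 80
```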
OR
if you want to create the certificate manually and have the ingress use it, then:
remove the annotation cert-manager.io/cluster-issuer: "hello-deployment-tls"
create the certificate manually
refer to its secret in the ingress rule.
You can check the certificate details in the browser and confirm the issuer is no longer CN = Kubernetes Ingress Controller Fake Certificate; in my case it's empty.
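The manual path can be sketched like this (all names hypothetical; the ingress then carries no cert-manager annotation and points spec.tls.secretName at hello-tls):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: hello-cert
spec:
  secretName: hello-tls          # referenced by spec.tls.secretName in the ingress
  dnsNames:
  - hello.example.com            # hypothetical
  issuerRef:
    kind: ClusterIssuer
    name: hello-deployment-tls
```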
Note - cert-manager v1.4
Initially I used a slightly outdated cert-manager v1.4 and hit the following issue, which went away after updating to v1.4.1.
It looks like:
$ kubectl describe certificaterequest selfsigned-certificate2-45k2c
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal cert-manager.io 41s cert-manager Certificate request has been approved by cert-manager.io
Warning DecodeError 41s cert-manager Failed to decode returned certificate: error decoding certificate PEM block
Useful links:
Setting up self-signed Issuer
Setting up CA issuers
Cluster Issuers

cert-manager Found pod with acme-order-url annotation set to that of Certificate, but it is not owned by the Certificate resource

I am working with cert-manager in my kubernetes cluster, in order to get certificates signed by the Let's Encrypt CA for my application service inside the cluster.
1. Create a cert-manager namespace
⟩ kubectl create namespace cert-manager
namespace/cert-manager created
2. I've created the CRDs that the Helm chart needs for the CA and certificate functionality:
⟩ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created
3. Disable resource validation on the cert-manager namespace
⟩ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
namespace/cert-manager labeled
4. Add the Jetstack Helm repository and update the local cache
⟩ helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
⟩ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
5. I've installed cert-manager inside my k8s cluster using helm:
helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.7.0 \
jetstack/cert-manager
6. I've created an ACME Issuer including the HTTP-01 challenge provider, so certificates can be obtained by performing challenge validations against an ACME server such as Let's Encrypt:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: b.garcia@possibilit.nl
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
I applied it in the same namespace (default) where my application service, for which I want the certificates, is located:
⟩ kubectl apply -f 01-lets-encrypt-issuer-staging.yaml
issuer.certmanager.k8s.io/letsencrypt-staging created
⟩ kubectl get issuer --namespace default
NAME AGE
letsencrypt-staging 22s
It has the following description. We can see that the ACME account was registered with the ACME server, and the Ready condition's status is True:
⟩ kubectl describe issuer letsencrypt-staging --namespace default
Name:         letsencrypt-staging
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Issuer","metadata":{"annotations":{},"name":"letsencrypt-staging","namespace":"default...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Issuer
Metadata:
  Creation Timestamp:  2019-03-13T10:12:01Z
  Generation:          1
  Resource Version:    247916
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/namespaces/default/issuers/letsencrypt-staging
  UID:                 7170a66e-4578-11e9-b6d4-2aeecf80bb69
Spec:
  Acme:
    Email:  b.garcia@myemail.com
    Http 01:
    Private Key Secret Ref:
      Name:  letsencrypt-staging
    Server:  https://acme-staging-v02.api.letsencrypt.org/directory
Status:
  Acme:
    Uri:  https://acme-staging-v02.api.letsencrypt.org/acme/acct/8550675
  Conditions:
    Last Transition Time:  2019-03-13T10:12:02Z
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:  <none>
7. I've created the certificate in the same namespace (default) where the Issuer was created, referencing it:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: zcrm365-lets-encrypt-staging
  # namespace: default
spec:
  secretName: zcrm365-lets-encrypt-staging-tls
  issuerRef:
    name: letsencrypt-staging
  commonName: test1kongletsencrypt.possibilit.nl
  # http01 challenge
  acme:
    config:
    - http01:
        ingressClass: nginx
        # ingress: nginx # kong-ingress-controller # nginx
      domains:
      - test1kongletsencrypt.possibilit.nl
Apply the certificate
⟩ kubectl apply -f 02-certificate-staging.yaml
certificate.certmanager.k8s.io/zcrm365-lets-encrypt-staging created
When I execute kubectl describe certificate zcrm365-lets-encrypt-staging, I can see the following:
⟩ kubectl describe certificate zcrm365-lets-encrypt-staging
Name:         zcrm365-lets-encrypt-staging
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"zcrm365-lets-encrypt-staging","names...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-03-13T19:32:25Z
  Generation:          1
  Resource Version:    321283
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/zcrm365-lets-encrypt-staging
  UID:                 bad7f778-45c6-11e9-b6d4-2aeecf80bb69
Spec:
  Acme:
    Config:
      Domains:
        test1kongletsencrypt.possibilit.nl
      Http 01:
        Ingress Class:  nginx
  Common Name:  test1kongletsencrypt.possibilit.nl
  Issuer Ref:
    Name:       letsencrypt-staging
  Secret Name:  zcrm365-lets-encrypt-staging-tls
Status:
  Conditions:
    Last Transition Time:  2019-03-13T19:32:25Z
    Message:               Certificate issuance in progress. Temporary certificate issued.
    Reason:                TemporaryCertificate
    Status:                False
    Type:                  Ready
Events:  <none>
We can see that the Ready status is False and only a temporary certificate has been issued so far.
This Certificate creates a secret named zcrm365-lets-encrypt-staging-tls, which holds the key pair tls.crt and tls.key:
⟩ kubectl describe secrets zcrm365-lets-encrypt-staging-tls
Name:         zcrm365-lets-encrypt-staging-tls
Namespace:    default
Labels:       certmanager.k8s.io/certificate-name=zcrm365-lets-encrypt-staging
Annotations:  certmanager.k8s.io/alt-names: test1kongletsencrypt.possibilit.nl
              certmanager.k8s.io/common-name: test1kongletsencrypt.possibilit.nl
              certmanager.k8s.io/ip-sans:
              certmanager.k8s.io/issuer-kind: Issuer
              certmanager.k8s.io/issuer-name: letsencrypt-staging
Type:  kubernetes.io/tls
Data
====
ca.crt:   0 bytes
tls.crt:  1029 bytes
tls.key:  1679 bytes
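To confirm what actually landed in the secret, the leaf certificate can be decoded (a sketch; requires openssl locally and kubectl access to the cluster):

```shell
# Extract tls.crt from the secret and print its subject, issuer and validity
kubectl get secret zcrm365-lets-encrypt-staging-tls \
  -o jsonpath='{.data.tls\.crt}' | base64 -d \
  | openssl x509 -noout -subject -issuer -dates
```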
8. Creating the ingress to my service application
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-ingress-zcrm365
  namespace: default
  annotations:
    # kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/issuer: "letsencrypt-staging"
    certmanager.k8s.io/acme-challenge-type: http01
    # certmanager.k8s.io/acme-http01-ingress-class: "true"
    # kubernetes.io/tls-acme: true
    # this annotation requires additional configuration of the
    # ingress-shim (see above). Namely, a default issuer must
    # be specified as arguments to the ingress-shim container.
spec:
  rules:
  - host: test1kongletsencrypt.possibilit.nl
    http:
      paths:
      - backend:
          serviceName: zcrm365dev
          servicePort: 80
        path: /
  tls:
  - hosts:
    - test1kongletsencrypt.possibilit.nl
    secretName: zcrm365-lets-encrypt-staging-tls
Apply the ingress
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 created
I can see the ingresses:
⟩ kubectl get ingress -n default
NAME HOSTS ADDRESS PORTS AGE
cm-acme-http-solver-2m6gl test1kongletsencrypt.possibilit.nl 80 3h3m
kong-ingress-zcrm365 test1kongletsencrypt.possibilit.nl 52.166.60.158 80, 443 3h3m
The details of the ingresses are the following:
⟩ kubectl describe ingress cm-acme-http-solver-2m6gl
Name: cm-acme-http-solver-2m6gl
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
test1kongletsencrypt.possibilit.nl
/.well-known/acme-challenge/br0Y8eEsuZ5C2fKoeNVL2y03wn1ZHOQwKQCOOkyWabE cm-acme-http-solver-9cwhm:8089 (<none>)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
Events: <none>
⟩ kubectl describe ingress kong-ingress-zcrm365
Name: kong-ingress-zcrm365
Namespace: default
Address: 52.166.60.158
Default backend: default-http-backend:80 (<none>)
TLS:
zcrm365-lets-encrypt-staging-tls terminates test1kongletsencrypt.possibilit.nl
Rules:
Host Path Backends
---- ---- --------
test1kongletsencrypt.possibilit.nl
/ zcrm365dev:80 (<none>)
Annotations:
certmanager.k8s.io/acme-challenge-type: http01
certmanager.k8s.io/issuer: letsencrypt-staging
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"certmanager.k8s.io/acme-challenge-type":"http01","certmanager.k8s.io/issuer":"letsencrypt-staging"},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"zcrm365-lets-encrypt-staging-tls"}]}}
Events: <none>
After all this, I can see that my application service is exposed via the kong-ingress-zcrm365 ingress, because it is reachable at my test1kongletsencrypt.possibilit.nl domain.
But as you can see, I don't get an HTTPS certificate for my service; the HTTPS connection is insecure.
I've checked the logs of my cert-manager pod and I have the following:
kubectl logs pod/cert-manager-6f68b58796-hlszm -n cert-manager
I0313 19:40:39.254765 1 controller.go:206] challenges controller: syncing item 'default/zcrm365-lets-encrypt-staging-298918015-0'
I0313 19:40:39.254869 1 logger.go:103] Calling Discover
I0313 19:40:39.257720 1 pod.go:89] Found pod "default/cm-acme-http-solver-s6s2n" with acme-order-url annotation set to that of Certificate "default/zcrm365-lets-encrypt-staging-298918015-0"but it is not owned by the Certificate resource, so skipping it.
I0313 19:40:39.257735 1 pod.go:64] No existing HTTP01 challenge solver pod found for Certificate "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.286823 1 service.go:51] No existing HTTP01 challenge solver service found for Certificate "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.347204 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=919604798
I0313 19:40:39.347437 1 ingress.go:98] No existing HTTP01 challenge solver ingress found for Challenge "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.362118 1 controller.go:178] ingress-shim controller: syncing item 'default/cm-acme-http-solver-2m6gl'
I0313 19:40:39.362257 1 sync.go:64] Not syncing ingress default/cm-acme-http-solver-2m6gl as it does not contain necessary annotations
I0313 19:40:39.362958 1 controller.go:184] ingress-shim controller: Finished processing work item "default/cm-acme-http-solver-2m6gl"
I0313 19:40:39.362702 1 pod.go:89] Found pod "default/cm-acme-http-solver-s6s2n" with acme-order-url annotation set to that of Certificate "default/zcrm365-lets-encrypt-staging-298918015-0"but it is not owned by the Certificate resource, so skipping it.
I0313 19:40:39.363270 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=919604798
I0313 19:40:46.279269 1 controller.go:206] challenges controller: syncing item 'default/zcrm365-lets-encrypt-staging-tls-1561329142-0'
E0313 19:40:46.279324 1 controller.go:230] ch 'default/zcrm365-lets-encrypt-staging-tls-1561329142-0' in work queue no longer exists
I0313 19:40:46.279332 1 controller.go:212] challenges controller: Finished processing work item "default/zcrm365-lets-encrypt-staging-tls-1561329142-0"
I think the HTTP-01 challenge process is not completing, because Let's Encrypt cannot verify that I own the test1kongletsencrypt.possibilit.nl domain.
How can I solve this in order to get TLS with Let's Encrypt?
Is it possible that I need to use the ingress-shim functionality of the cert-manager Helm chart and/or webhook validation?
IMPORTANT UPDATE
I am currently using kong-ingress-controller as the ingress for my deployment.
I installed it the way described in this gist.
But I am not sure how to integrate my kong-ingress-controller with cert-manager when creating my zcrm365-lets-encrypt-staging certificate signing request.
This is my current view of my kong resources
⟩ kubectl get all -n kong
NAME READY STATUS RESTARTS AGE
pod/kong-7f66b99bb5-ldp4v 1/1 Running 0 2d16h
pod/kong-ingress-controller-667b4748d4-sptxm 1/2 Running 782 2d16h
pod/kong-migrations-h6qt2 0/1 Completed 0 2d16h
pod/konga-85b66cffff-6k6lt 1/1 Running 0 41h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kong-ingress-controller NodePort 10.0.48.131 <none> 8001:32257/TCP 2d16h
service/kong-proxy LoadBalancer 10.0.153.8 52.166.60.158 80:31577/TCP,443:32323/TCP 2d16h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/kong 1 1 1 1 2d16h
deployment.apps/kong-ingress-controller 1 1 1 0 2d16h
deployment.apps/konga 1 1 1 1 41h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kong-7f66b99bb5 1 1 1 2d16h
replicaset.apps/kong-ingress-controller-667b4748d4 1 1 0 2d16h
replicaset.apps/konga-85b66cffff 1 1 1 41h
NAME COMPLETIONS DURATION AGE
job.batch/kong-migrations 1/1 86s 2d16h
The service/kong-proxy service provides the external (public) IP, and when I create kong-ingress-zcrm365, this ingress takes the external IP address provided by kong-proxy. But of course, in the ingress I am indicating that it should use nginx and not kong-ingress-controller.
And by the way, I don't have the NGINX ingress controller installed; I am a little confused here.
If someone can point me in the right direction, the help will be highly appreciated.
First, if you are using an nginx ingress, check that the NGINX ingress controller is actually running.
You are on the right track, but you have to add an ingress controller for the ingress: if you are using an nginx ingress, you have to deploy the NGINX ingress controller in the K8s cluster.
Your approach with cert-manager and everything else is fine. Here is a link to a tutorial from DigitalOcean that follows the same approach; compare your steps with it:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
If there is any issue, drop a comment for more help.

How to setup letsencrypt cert issuer for kubernetes on AWS EKS with Terraform

I'm trying to setup letsencrypt cert-issuer on kubernetes cluster. My terraform looks like this:
resource "helm_release" "cert_manager" {
  keyring    = ""
  name       = "cert-manager"
  chart      = "stable/cert-manager"
  namespace  = "kube-system"
  depends_on = ["helm_release.ingress"]

  set {
    name  = "webhook.enabled"
    value = "false"
  }

  provisioner "local-exec" {
    command = "kubectl --server=${aws_eks_cluster.demo.endpoint} --insecure-skip-tls-verify=true --token=${data.aws_eks_cluster_auth.demo.token} apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.6/deploy/manifests/00-crds.yaml"
  }

  provisioner "local-exec" {
    command = "kubectl --server=${aws_eks_cluster.demo.endpoint} --insecure-skip-tls-verify=true --token=${data.aws_eks_cluster_auth.demo.token} label namespace kube-system certmanager.k8s.io/disable-validation=\"true\" --overwrite"
  }

  provisioner "local-exec" {
    command = <<EOT
cat <<EOF | kubectl --server=${aws_eks_cluster.demo.endpoint} --insecure-skip-tls-verify=true --token=${data.aws_eks_cluster_auth.demo.token} create -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: mymail@gmail.com
    privateKeySecretRef:
      name: letsencrypt
    http01: {}
EOF
EOT
  }
}
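Once Terraform has run, a quick way to confirm the provisioned ClusterIssuer registered with the ACME server (assuming kubectl access to the cluster):

```shell
kubectl get clusterissuer letsencrypt
kubectl describe clusterissuer letsencrypt   # look for an ACMEAccountRegistered / Ready condition
```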
I have a simple test pod and service deployed. When I go to http://<cluster-address>/apple, it responds with apple. So I try to create an ingress for it:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
  labels:
    app: apple
    heritage: Tiller
    release: apple
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
  tls:
  - hosts:
    - my.domain.alias.to.cluster.address.io
    secretName: my.domain.alias.to.cluster.address.io
But still, when I go to https://my.domain.alias.to.cluster.address.io/apple, my browser warns me, and I can see the certificate is the Kubernetes Ingress Controller Fake Certificate.
What am I missing? What should I do to have a cert issued by Let's Encrypt there?
UPDATE:
Logs from my cert-manager pod:
I0220 16:34:49.071883 1 sync.go:180] Certificate "my.domain.alias.to.cluster.address.io" for ingress "example-ingress" is up to date
I0220 16:34:49.072121 1 controller.go:179] ingress-shim controller: Finished processing work item "default/example-ingress"
I0220 16:34:49.071454 1 controller.go:145] certificates controller: syncing item 'default/my.domain.alias.to.cluster.address.io'
I0220 16:34:49.073892 1 helpers.go:183] Setting lastTransitionTime for Certificate "my.domain.alias.to.cluster.address.io" condition "Ready" to 2019-02-20 16:34:49.073885527 +0000 UTC m=+889.175312552
I0220 16:34:49.074450 1 sync.go:263] Certificate default/my.domain.alias.to.cluster.address.io scheduled for renewal in 1438h47m42.92555861s
I0220 16:34:49.081224 1 controller.go:151] certificates controller: Finished processing work item "default/my.domain.alias.to.cluster.address.io"
I0220 16:34:49.081479 1 controller.go:173] ingress-shim controller: syncing item 'default/example-ingress'
I0220 16:34:49.081567 1 sync.go:177] Certificate "my.domain.alias.to.cluster.address.io" for ingress "example-ingress" already exists
I0220 16:34:49.081631 1 sync.go:180] Certificate "my.domain.alias.to.cluster.address.io" for ingress "example-ingress" is up to date
I0220 16:34:49.081672 1 controller.go:179] ingress-shim controller: Finished processing work item "default/example-ingress"
I0220 16:34:49.081743 1 controller.go:145] certificates controller: syncing item 'default/my.domain.alias.to.cluster.address.io'
I0220 16:34:49.082384 1 sync.go:263] Certificate default/my.domain.alias.to.cluster.address.io scheduled for renewal in 1438h47m42.917624001s
I0220 16:34:49.087552 1 controller.go:151] certificates controller: Finished processing work item "default/my.domain.alias.to.cluster.address.io"
I0220 16:35:04.571789 1 controller.go:173] ingress-shim controller: syncing item 'default/example-ingress'
And this is what kubectl describe certificate my.domain.alias.to.cluster.address.io returns:
Name:         my.domain.alias.to.cluster.address.io
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-02-20T16:34:49Z
  Generation:          1
  Owner References:
    API Version:           extensions/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  example-ingress
    UID:                   709a55df-352d-11e9-bf9d-06ede39599be
  Resource Version:        278211
  Self Link:               /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/my.domain.alias.to.cluster.address.io
  UID:                     709bf1bd-352d-11e9-b941-026486635030
Spec:
  Acme:
    Config:
      Domains:
        my.domain.alias.to.cluster.address.io
      Http 01:
        Ingress:
        Ingress Class:  nginx
  Dns Names:
    my.domain.alias.to.cluster.address.io
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt
  Secret Name:  my.domain.alias.to.cluster.address.io
Status:
  Conditions:
    Last Transition Time:  2019-02-20T16:34:49Z
    Message:               Certificate is up to date and has not expired
    Reason:                Ready
    Status:                True
    Type:                  Ready
  Not After:               2019-05-21T15:22:32Z
Events:  <none>
In the logs of ingress controller I can find this:
I0220 16:22:34.428736 8 store.go:446] secret default/my.domain.alias.to.cluster.address.io was updated and it is used in ingress annotations. Parsing...
I0220 16:22:34.429898 8 backend_ssl.go:68] Adding Secret "default/my.domain.alias.to.cluster.address.io" to the local store
I0220 16:22:35.410950 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:22:35.522502 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:22:35 +0000]TCP200000.000
I0220 16:27:39.225810 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:27:39.226685 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"f2f0c9bd-345d-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"277488", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/example-ingress
I0220 16:27:39.336879 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:27:39 +0000]TCP200000.001
I0220 16:27:53.090686 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"78ab0815-352c-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"277520", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/example-ingress
I0220 16:27:53.091216 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:27:53.212854 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:27:53 +0000]TCP200000.000
I0220 16:28:04.566342 8 status.go:388] updating Ingress default/example-ingress status from [] to [{34.245.112.11 }]
I0220 16:28:04.576525 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"78ab0815-352c-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"277542", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/example-ingress
I0220 16:28:05.676217 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"78ab0815-352c-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"277546", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/example-ingress
I0220 16:28:07.909830 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:28:08.019070 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:28:08 +0000]TCP200000.000
I0220 16:28:22.557334 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:28:22.557490 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"cm-acme-http-solver-dmnqh", UID:"7f8f4be4-3461-11e9-b941-026486635030", APIVersion:"extensions/v1beta1", ResourceVersion:"277576", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/cm-acme-http-solver-dmnqh
I0220 16:28:22.662971 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:28:22 +0000]TCP200000.000
I0220 16:34:49.057385 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"709a55df-352d-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"278207", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/example-ingress
I0220 16:34:49.057688 8 controller.go:172] Configuration changes detected, backend reload required.
I0220 16:34:49.175039 8 controller.go:190] Backend successfully reloaded.
[20/Feb/2019:16:34:49 +0000]TCP200000.000
I0220 16:35:04.565324 8 status.go:388] updating Ingress default/example-ingress status from [] to [{34.245.112.11 }]
I0220 16:35:04.572954 8 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"709a55df-352d-11e9-bf9d-06ede39599be", APIVersion:"extensions/v1beta1", ResourceVersion:"278236", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/example-ingress
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:18:38:33 +0000] "\x05\x01\x00" 400 157 "-" "-" 0 0.751 [] - - - - e0aec2a9e3e71e136a1c62939e341b49
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:18:39:50 +0000] "\x04\x01\x00P\x05\xBC\xD2\x0C\x00" 400 157 "-" "-" 0 0.579 [] - - - - 7f825a3ef2f94e200b14fe3691e4fdde
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:18:41:30 +0000] "GET http://5.188.210.12/echo.php HTTP/1.1" 400 657 "https://www.google.com/" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36" 359 0.000 [] - - - - 1167890a763ddc360051046c84a47d21
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:19:46:35 +0000] "GET /apple HTTP/1.1" 308 171 "-" "Mozilla/5.0 (Linux; Android 8.0.0; ONEPLUS A3003) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.105 Mobile Safari/537.36" 555 0.000 [default-apple-service-5678] - - - - b1f1bb0da3e465c3a54e963663dffb61
10.0.1.114 - [10.0.1.114] - - [20/Feb/2019:20:38:39 +0000] "\x03\x00\x00/*\xE0\x00\x00\x00\x00\x00Cookie: mstshash=Administr" 400 157 "-" "-" 0 0.065 [] - - - - cd420e70b3f78bee069f8bac97918e36
Basically, Let's Encrypt is not issuing the certificate for you, so the ingress controller is defaulting to the fake cert. You need to make sure that my.domain.alias.to.cluster.address.io is publicly resolvable (say, through a DNS server like 8.8.8.8) and that it resolves to a publicly accessible IP address. You can debug what's happening by looking at the cert-manager pod logs.
$ kubectl logs <certmanagerpod>
You can also see the details about the certificates (and you might be able to see why it didn't get issued).
$ kubectl get certificates
$ kubectl describe certificate <certificate-name>
Another aspect is that you could be rate-limited by https://acme-v02.api.letsencrypt.org/directory, which is the production environment. You could also try https://acme-staging-v02.api.letsencrypt.org/directory, the staging environment.
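To rule out production rate limits while debugging, the same ClusterIssuer can be pointed at the staging endpoint; a sketch in the same certmanager.k8s.io/v1alpha1 API used above (the email address is a placeholder):

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # staging endpoint: much higher rate limits, certs are not browser-trusted
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com        # hypothetical address
    privateKeySecretRef:
      name: letsencrypt-staging
    http01: {}
```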
It turned out I was missing host in the ingress rule. path alone is not enough if I want to use a certificate.
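The fix, sketched against the ingress above (only the host line is new):

```yaml
spec:
  rules:
  - host: my.domain.alias.to.cluster.address.io   # this was missing
    http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
  tls:
  - hosts:
    - my.domain.alias.to.cluster.address.io
    secretName: my.domain.alias.to.cluster.address.io
```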

Enabling ExpandPersistentVolumes

I need to resize a bunch of PVCs. It seems the easiest way to do it is through
the ExpandPersistentVolumes feature. I am however having trouble getting the
configuration to cooperate.
The ExpandPersistentVolumes feature gate is set in kubelet on all three
masters, as shown:
(output trimmed to relevant bits for sanity)
$ parallel-ssh -h /tmp/masters -P "ps aux | grep feature"
172.20.53.249: root 15206 7.4 0.5 619888 83952 ? Ssl 19:52 0:02 /opt/kubernetes/bin/kubelet --feature-gates=ExpandPersistentVolumes=true,ExperimentalCriticalPodAnnotation=true
[1] 12:53:08 [SUCCESS] 172.20...
172.20.58.111: root 17798 4.5 0.5 636280 87328 ? Ssl 19:51 0:04 /opt/kubernetes/bin/kubelet --feature-gates=ExpandPersistentVolumes=true,ExperimentalCriticalPodAnnotation=true
[2] 12:53:08 [SUCCESS] 172.20...
172.20.53.240: root 9287 4.0 0.5 645276 90528 ? Ssl 19:50 0:06 /opt/kubernetes/bin/kubelet --feature-gates=ExpandPersistentVolumes=true,ExperimentalCriticalPodAnnotation=true
[3] 12:53:08 [SUCCESS] 172.20..
The apiserver has the PersistentVolumeClaimResize admission controller, as shown:
$ kubectl --namespace=kube-system get pod -o yaml | grep -i admission
/usr/local/bin/kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota
/usr/local/bin/kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota
/usr/local/bin/kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota
However, when I create or edit a storage class to add allowVolumeExpansion,
it is removed on save. For example:
$ cat new-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: null
  labels:
    k8s-addon: storage-aws.addons.k8s.io
  name: gp2-2
  selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2
parameters:
  encrypted: "true"
  kmsKeyId: arn:aws:kms:us-west-2:<omitted>
  type: gp2
  zone: us-west-2a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
allowVolumeExpansion: true
$ kubectl create -f new-sc.yaml
storageclass "gp2-2" created
$ kubectl get sc gp2-2 -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: 2018-05-22T20:00:17Z
  labels:
    k8s-addon: storage-aws.addons.k8s.io
  name: gp2-2
  resourceVersion: "2546166"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2-2
  uid: <omitted>
parameters:
  encrypted: "true"
  kmsKeyId: arn:aws:kms:us-west-2:<omitted>
  type: gp2
  zone: us-west-2a
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
What am I missing? What is erasing this key from my storageclass configuration?
EDIT: Here is the command used by the kube-apiserver pods. It does not say anything about feature gates. The cluster was launched using Kops.
- /bin/sh
- -c
- mkfifo /tmp/pipe; (tee -a /var/log/kube-apiserver.log < /tmp/pipe & ) ; exec
/usr/local/bin/kube-apiserver --address=127.0.0.1 --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,PersistentVolumeClaimResize,ResourceQuota
--allow-privileged=true --anonymous-auth=false --apiserver-count=3 --authorization-mode=RBAC
--basic-auth-file=/srv/kubernetes/basic_auth.csv --client-ca-file=/srv/kubernetes/ca.crt
--cloud-provider=aws --etcd-cafile=/srv/kubernetes/ca.crt --etcd-certfile=/srv/kubernetes/etcd-client.pem
--etcd-keyfile=/srv/kubernetes/etcd-client-key.pem --etcd-servers-overrides=/events#https://127.0.0.1:4002
--etcd-servers=https://127.0.0.1:4001 --insecure-port=8080 --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
--proxy-client-cert-file=/srv/kubernetes/apiserver-aggregator.cert --proxy-client-key-file=/srv/kubernetes/apiserver-aggregator.key
--requestheader-allowed-names=aggregator --requestheader-client-ca-file=/srv/kubernetes/apiserver-aggregator-ca.cert
--requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User --secure-port=443 --service-cluster-ip-range=100.64.0.0/13
--storage-backend=etcd3 --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key
--token-auth-file=/srv/kubernetes/known_tokens.csv --v=1 > /tmp/pipe 2>&1
It can happen if you did not enable the alpha feature gate for this option on the API server.
Did you set the --feature-gates option for kube-apiserver?
--feature-gates mapStringBool - A set of key=value pairs that describe feature gates for alpha/experimental features. Options are:
...
ExpandPersistentVolumes=true|false (ALPHA - default=false)
...
Update: If you don't see this option in the command line arguments, you need to add it (--feature-gates=ExpandPersistentVolumes=true).
In case you run kube-apiserver as a pod, you should edit /etc/kubernetes/manifests/kube-apiserver.yaml and add the feature-gate option to other arguments. kube-apiserver will restart automatically.
In case you run kube-apiserver as a process maintained by systemd, you should edit kube-apiserver.service or service options $KUBE_API_ARGS in a separate file, and append feature-gate option there. Restart the service with systemctl restart kube-apiserver.service command.
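A quick sanity check that the flag actually reached the running API server (a sketch; the manifest path assumes a static-pod setup like kubeadm or Kops masters):

```shell
# The static pod manifest should contain the flag after editing...
grep -- '--feature-gates' /etc/kubernetes/manifests/kube-apiserver.yaml

# ...and the running process should show it once the pod restarts
ps aux | grep '[k]ube-apiserver' | tr ' ' '\n' | grep feature-gates
```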
After enabling it, you can create a StorageClass object with allowVolumeExpansion option:
# kubectl get sc -o yaml --export
apiVersion: v1
items:
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    creationTimestamp: 2018-05-23T14:38:43Z
    labels:
      k8s-addon: storage-aws.addons.k8s.io
    name: gp2-2
    namespace: ""
    resourceVersion: "1385"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/gp2-2
    uid: fe516dcf-5e96-11e8-a86d-42010a9a0002
  parameters:
    encrypted: "true"
    kmsKeyId: arn:aws:kms:us-west-2:<omitted>
    type: gp2
    zone: us-west-2a
  provisioner: kubernetes.io/aws-ebs
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
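With allowVolumeExpansion set on the class, an existing PVC that uses it can be resized by patching the storage request (a sketch; the PVC name and size are hypothetical):

```shell
# Bump the requested size; the volume plugin then expands the backing EBS volume
kubectl patch pvc my-data-claim \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Watch the PVC conditions (e.g. FileSystemResizePending) until the new size shows
kubectl get pvc my-data-claim -o yaml
```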