Kubectl delete tls when no namespace - kubernetes

There was a namespace "sandbox" on the node which was deleted, but there is still a challenge for a certificate "echo-tls".
I can no longer access the sandbox namespace to delete this cert.
Could anyone help me delete this resource?
Here are the cert-manager logs:
Found status change for Certificate "echo-tls" condition "Ready": "True" -> "False"; setting lastTransitionTime to...
cert-manager/controller/CertificateReadiness "msg"="re-queuing item due to error processing" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"echo-tls\": StorageError: invalid object, Code: 4, Key: /cert-manager.io/certificates/sandbox/echo-tls, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ..., UID in object meta: " "key"="sandbox/echo-tls"
After restarting the cert-manager pod, here are the logs:
cert-manager/controller/certificaterequests/handleOwnedResource "msg"="error getting referenced owning resource" "error"="certificaterequest.cert-manager.io \"echo-tls-bkmm8\" not found" "related_resource_kind"="CertificateRequest" "related_resource_name"="echo-tls-bkmm8" "related_resource_namespace"="sandbox" "resource_kind"="Order" "resource_name"="echo-tls-bkmm8-1177139468" "resource_namespace"="sandbox" "resource_version"="v1"
cert-manager/controller/orders "msg"="re-queuing item due to error processing" "error"="ACME client for issuer not initialised/available" "key"="sandbox/echo-tls-dwpt4-1177139468"
And then the same logs as before.
The issuer:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ***
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress: {}
The configs for deployment:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: <APP_NAME>
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx-<ENV>
    acme.cert-manager.io/http01-ingress-class: nginx-<ENV>
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - ***.fr
    secretName: <APP_NAME>-tls
  rules:
  - host: ***.fr
    http:
      paths:
      - backend:
          serviceName: <APP_NAME>
          servicePort: 80
.k8s_config: &k8s_config
  before_script:
    - export HOME=/tmp
    - export K8S_NAMESPACE="${APP_NAME}"
    - kubectl config set-cluster k8s --server="${K8S_SERVER}"
    - kubectl config set clusters.k8s.certificate-authority-data ${K8S_CA_DATA}
    - kubectl config set-credentials default --token="${K8S_USER_TOKEN}"
    - kubectl config set-context default --cluster=k8s --user=default --namespace=default
    - kubectl config set-context ${K8S_NAMESPACE} --cluster=k8s --user=default --namespace=${K8S_NAMESPACE}
    - kubectl config use-context default
    - if [ -z `kubectl get namespace ${K8S_NAMESPACE} --no-headers --output=go-template={{.metadata.name}} 2>/dev/null` ]; then kubectl create namespace ${K8S_NAMESPACE}; fi
    - if [ -z `kubectl --namespace=${K8S_NAMESPACE} get secret *** --no-headers --output=go-template={{.metadata.name}} 2>/dev/null` ]; then kubectl get secret *** --output yaml | sed "s/namespace:\ default/namespace:\ ${K8S_NAMESPACE}/" | kubectl create -f - ; fi
    - kubectl config use-context ${K8S_NAMESPACE}

Usually certificates are stored inside Kubernetes secrets: https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets. You can retrieve secrets using kubectl get secrets --all-namespaces. You can also check which secrets are used by a given pod by checking its YAML description: kubectl get pods -n <pod-namespace> -o yaml (additional information: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
A namespace is cluster-wide; it is not located on any node. So deleting a node does not delete any namespace.
If the above suggestions do not fit your needs, could you please provide some YAML files and command-line instructions that would allow reproducing the problem?
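For reference, a minimal sketch of those suggestions as commands (the resource and namespace names come from the question's logs; whether the orphaned objects still list depends on the state of the cluster):
# List TLS secrets and cert-manager resources across all namespaces
kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/tls
kubectl get certificates.cert-manager.io,certificaterequests.cert-manager.io --all-namespaces
# If the orphaned Certificate still lists, try deleting it directly
kubectl delete certificates.cert-manager.io echo-tls -n sandbox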

Finally, this Sunday, cert-manager stopped the challenges on the old TLS certificate without any further action.

Related

How can I fix 'failed calling webhook "webhook.cert-manager.io"'?

I'm trying to set up a K3s cluster. When I had a single master-and-agent setup, cert-manager had no issues. Now I'm trying a two-master setup with embedded etcd. I opened TCP ports 6443 and 2379-2380 for both VMs and did the following:
VM1: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --cluster-init
VM2: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --server https://MASTER_IP:6443
# k3s kubectl get nodes
NAME   STATUS   ROLES                       AGE    VERSION
VM1    Ready    control-plane,etcd,master   130m   v1.22.7+k3s1
VM2    Ready    control-plane,etcd,master   128m   v1.22.7+k3s1
Installing cert-manager works fine:
# k3s kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
# k3s kubectl get pods --namespace cert-manager
NAME                                       READY   STATUS
cert-manager-b4d6fd99b-c6fpc               1/1     Running
cert-manager-cainjector-74bfccdfdf-gtmrd   1/1     Running
cert-manager-webhook-65b766b5f8-brb76      1/1     Running
My manifest has the following definition:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: info@example.org
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - selector: {}
      http01:
        ingress: {}
Which results in the following error:
# k3s kubectl apply -f manifest.yaml
Error from server (InternalError): error when creating "manifest.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded
I tried disabling both firewalls, waiting a day, reset and re-setup, but the error persists. Google hasn't been much help either. The little info I can find goes over my head for the most part and no tutorial seems to do any extra steps.
Try to specify the proper ingress class name in your Cluster Issuer, like this:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: info@example.org
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
Also, make sure that you have the cert-manager annotation and the TLS secret name specified in your Ingress, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  ...
spec:
  tls:
  - hosts:
    - domain.com
    secretName: letsencrypt-account-key
A good starting point for troubleshooting issues with the webhook can be found in the docs, e.g. there is a section for problems on GKE private clusters.
In my case, however, this didn't really solve the problem. For me the issue was that while playing around with cert-manager I happened to install and uninstall it multiple times. It turned out that just removing the namespace, e.g. kubectl delete namespace cert-manager, didn't remove the webhooks and other non-obvious resources.
Following the official guide for uninstalling cert-manager and applying the manifests again solved the issue.
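For example, a sketch of hunting down the cluster-scoped leftovers (the names below are the ones the official manifests use; verify them in your cluster first):
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep cert-manager
kubectl delete validatingwebhookconfiguration cert-manager-webhook
kubectl delete mutatingwebhookconfiguration cert-manager-webhook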
I did this, and it worked for me:
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.8.0 \
  --set webhook.securePort=10260
source: https://hackmd.io/@maelvls/debug-cert-manager-webhook

Failed calling webhook "mutate.runner.actions.summerwind.dev": x509: certificate signed by unknown authority

Kubernetes: v1.19.9-gke.1900
Helm actions-runner-controller: 0.12.7
I have CRDs created by the GitHub Actions controller:
❯ kubectl api-resources | grep summerwind.dev
horizontalrunnerautoscalers   actions.summerwind.dev/v1alpha1   true   HorizontalRunnerAutoscaler
runnerdeployments             actions.summerwind.dev/v1alpha1   true   RunnerDeployment
runnerreplicasets             actions.summerwind.dev/v1alpha1   true   RunnerReplicaSet
runners                       actions.summerwind.dev/v1alpha1   true   Runner
runnersets                    actions.summerwind.dev/v1alpha1   true   RunnerSet
And I also have a sample file with two simplified resources: a Pod and a Runner.
❯ cat test.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
spec:
  containers:
  - name: main
    image: busybox
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: runner-1
spec:
  organization: my-org
  env: []
Now, when I apply both of these resources, the Pod works fine but the Runner fails:
❯ kubectl apply -f test.yml
pod/pod-1 created
Error from server (InternalError): error when creating "test.yml": Internal error occurred: failed calling webhook "mutate.runner.actions.summerwind.dev": Post "https://actions-runner-controller-webhook.tools.svc:443/mutate-actions-summerwind-dev-v1alpha1-runner?timeout=30s": x509: certificate signed by unknown authority
As you can see, this call goes to the MutatingWebhookConfiguration. And this webhook sends a request to the controller, which prints only:
❯ kubectl -n tools logs actions-runner-controller-6cd6fbdd56-qlzrd -c manager
...
http: TLS handshake error from 10.128.0.3:59736: remote error: tls: bad certificate
QUESTION:
What is the next step for troubleshooting?
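Not an answer from the thread, but one plausible next step is to compare the CA bundle registered in the webhook configuration against the certificate the controller actually serves; the configuration and secret names below are placeholders, so look them up first:
# Find the real names
kubectl get mutatingwebhookconfigurations
kubectl -n tools get secrets
# CA bundle the API server uses to verify the webhook
kubectl get mutatingwebhookconfiguration <webhook-config-name> -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d | openssl x509 -noout -subject -dates
# Certificate the controller presents
kubectl -n tools get secret <serving-cert-secret> -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -dates
If the two disagree (for example, the secret was regenerated but the webhook configuration kept the old caBundle), that would produce exactly this kind of "bad certificate" handshake error.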

K8S cert-manager error creating acme challenge pods

I've been trying for the last 3 days to set up cert-manager on a K8S cluster (v1.19.8) in an OpenStack environment with 1 master and 2 nodes.
It worked before (about a month ago), but since I re-created the cluster, ACME challenge pods cannot be created due to this error:
Status:
  Presented:   false
  Processing:  true
  Reason:      pods "cm-acme-http-solver-" is forbidden: PodSecurityPolicy: unable to admit pod: []
  State:       pending
Events:
  Type     Reason        Age                    From          Message
  ----     ------        ---                    ----          -------
  Normal   Started       8m25s                  cert-manager  Challenge scheduled for processing
  Warning  PresentError  3m18s (x7 over 8m23s)  cert-manager  Error presenting challenge: pods "cm-acme-http-solver-" is forbidden: PodSecurityPolicy: unable to admit pod: []
I've tried different versions of ingress-nginx, different versions of cert-manager, and different versions of K8s, but to no avail. I'm going crazy..., please help. Many thanks :)
Cluster setup
kubectl create namespace ingress-nginx && \
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx && \
kubectl create namespace cert-manager && \
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.1.0 \
  --set installCRDs=true
Issuer
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: email@example.com
    preferredChain: "ISRG Root X1"
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - host.com
    secretName: the-secret-name
  rules:
  - host: host.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-nginx
            port:
              number: 80
After some debugging and much help from the hosting provider, we found the problem and the solution.
We were using the latest (from master) version of Magnum/OpenStack, which got an update that installs a PodSecurityPolicy controller by default. That prevented the ACME pods from being created by cert-manager.
Recreating the cluster without a policy controller solved the issue:
openstack coe cluster create \
  --cluster-template v1.kube1.20.4 \
  --labels \
    admission_control_list="NodeRestriction,NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,PersistentVolumeClaimResize,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,RuntimeClass" \
  --merge-labels
...
A year late, but adding another solution in case it helps others who find this. I had the same issue of the challenge pod being blocked by PSP, but really didn't want to have to recreate/reconfigure my cluster, so I eventually solved it by adding this to the Helm chart values.yaml:
https://github.com/cert-manager/cert-manager/blob/master/deploy/charts/cert-manager/values.yaml
global:
  podSecurityPolicy:
    enabled: true
    useAppArmor: false
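For completeness, a sketch of how those values would be applied (the release name, namespace, and chart version are assumed to match the question's setup):
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.1.0 \
  --set installCRDs=true \
  -f values.yaml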
In my case, this is part of a Gitlab deployment so I added it under the certmanager key, as follows:
certmanager:
  install: true
  global:
    podSecurityPolicy:
      enabled: true
      useAppArmor: false
(tags for search: gitlab helm chart certmanager PodSecurityPolicy "unable to admit pod" blocked)

How to replace the "Kubernetes fake certificate" with a wildcard certificate (on a bare-metal private cloud) using NGINX Ingress and cert-manager

We have set up a Kubernetes cluster on our bare-metal server.
We deploy our application where each namespace is an application for the end customer, i.e. customer1.mydomain.com -> namespace: cust1.
We keep getting the Kubernetes Ingress Controller Fake Certificate.
We have purchased our own wildcard certificate for *.mydomain.com:
#kubectl create secret tls OUR-SECRET --key /path/private.key --cert /path/chain.crt -n ingress-nginx
#kubectl create secret tls OUR-SECRET --key /path/private.key --cert /path/chain.crt -n kube-system
ingress.yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ourcloud
  namespace: cert-manager
spec:
  secretName: oursecret
  issuerRef:
    name: letsencrypt-prod
  commonName: '*.mydomain.com'
  acme:
    config:
    - dns01:
        provider: cf-dns-prod
      domains:
      - '*.mydomain.com'
kubectl apply -f ingress.yaml
certificate.certmanager.k8s.io/ourcloud created
https://cust1.mydomain.com connects with Kubernetes Ingress Controller Fake Certificate
I found the problem. I had the wrong filename in my YAML for the certificate files. It's always good to look at the ingress logs:
kubectl logs nginx-ingress-controller-689498bc7c-tf5 -n ingress-nginx
kubectl get -o yaml ingress --all-namespaces
Try to recreate the secret from the files and see if it works:
kubectl delete -n cust4 SECRETNAME
kubectl -n cust4 create secret tls SECRETNAME --key key.key --cert cert.crt
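Before recreating it, it can also help to confirm that the key and certificate files actually match, since a mismatched or wrongly named file is a classic cause of the fake certificate. A sketch for an RSA key pair:
openssl x509 -noout -modulus -in cert.crt | openssl md5
openssl rsa -noout -modulus -in key.key | openssl md5
# the two digests must be identical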
If you are using Helm and cert-manager, make sure each ingress resource has a different certificate name; these values are usually set from the values file in a Helm chart.
tls:
- secretName: <certificate name>
  hosts:
  - example.com
If you have successfully deployed your ingress resources, you can check the available certificates to avoid name collisions:
kubectl get certificates
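To see which TLS secret each ingress references (and spot collisions), something like this works:
kubectl get ingress --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.tls[*].secretName}{"\n"}{end}'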

cert-manager Found pod with acme-order-url annotation set to that of Certificate, but it is not owned by the Certificate resource

I am working with cert-manager in my Kubernetes cluster in order to get certificates signed by the Let's Encrypt CA for my application service inside the cluster.
1. Create a cert-manager namespace
⟩ kubectl create namespace cert-manager
namespace/cert-manager created
2. I've created the CRDs that Helm needs to implement the CA and certificate functionality.
⟩ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created
3. Disable resource validation on the cert-manager namespace
⟩ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
namespace/cert-manager labeled
4. Add the Jetstack Helm repository and update the local cache
⟩ helm repo add jetstack https://charts.jetstack.io
"jetstack" has been added to your repositories
⟩ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
5. I've installed cert-manager inside my k8s cluster using helm:
helm install \
--name cert-manager \
--namespace cert-manager \
--version v0.7.0 \
jetstack/cert-manager
6. I've created an ACME Issuer, including the HTTP-01 challenge provider, to obtain certificates by performing challenge validations against an ACME server such as Let's Encrypt.
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: b.garcia@possibilit.nl
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
I applied it in the same namespace (default) where my application service, for which I want to get the certificates, is located.
⟩ kubectl apply -f 01-lets-encrypt-issuer-staging.yaml
issuer.certmanager.k8s.io/letsencrypt-staging created
⟩ kubectl get issuer --namespace default
NAME AGE
letsencrypt-staging 22s
It has the following description. We can see that the ACME account was registered with the ACME server, and the Status is True and Ready:
⟩ kubectl describe issuer letsencrypt-staging --namespace default
Name:         letsencrypt-staging
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Issuer","metadata":{"annotations":{},"name":"letsencrypt-staging","namespace":"default...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Issuer
Metadata:
  Creation Timestamp:  2019-03-13T10:12:01Z
  Generation:          1
  Resource Version:    247916
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/namespaces/default/issuers/letsencrypt-staging
  UID:                 7170a66e-4578-11e9-b6d4-2aeecf80bb69
Spec:
  Acme:
    Email:  b.garcia@myemail.com
    Http 01:
    Private Key Secret Ref:
      Name:  letsencrypt-staging
    Server:  https://acme-staging-v02.api.letsencrypt.org/directory
Status:
  Acme:
    Uri:  https://acme-staging-v02.api.letsencrypt.org/acme/acct/8550675
  Conditions:
    Last Transition Time:  2019-03-13T10:12:02Z
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:  <none>
7. I've created the certificate in the same namespace (default) where the Issuer was created, referencing it:
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: zcrm365-lets-encrypt-staging
  #namespace: default
spec:
  secretName: zcrm365-lets-encrypt-staging-tls
  issuerRef:
    name: letsencrypt-staging
  commonName: test1kongletsencrypt.possibilit.nl
  # http01 challenge
  acme:
    config:
    - http01:
        ingressClass: nginx
        # ingress: nginx # kong-ingress-controller # nginx
      domains:
      - test1kongletsencrypt.possibilit.nl
Apply the certificate
⟩ kubectl apply -f 02-certificate-staging.yaml
certificate.certmanager.k8s.io/zcrm365-lets-encrypt-staging created
I executed kubectl describe certificate zcrm365-lets-encrypt-staging and I can see the following:
⟩ kubectl describe certificate zcrm365-lets-encrypt-staging
Name:         zcrm365-lets-encrypt-staging
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"zcrm365-lets-encrypt-staging","names...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-03-13T19:32:25Z
  Generation:          1
  Resource Version:    321283
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/zcrm365-lets-encrypt-staging
  UID:                 bad7f778-45c6-11e9-b6d4-2aeecf80bb69
Spec:
  Acme:
    Config:
      Domains:
        test1kongletsencrypt.possibilit.nl
      Http 01:
        Ingress Class:  nginx
  Common Name:  test1kongletsencrypt.possibilit.nl
  Issuer Ref:
    Name:       letsencrypt-staging
  Secret Name:  zcrm365-lets-encrypt-staging-tls
Status:
  Conditions:
    Last Transition Time:  2019-03-13T19:32:25Z
    Message:               Certificate issuance in progress. Temporary certificate issued.
    Reason:                TemporaryCertificate
    Status:                False
    Type:                  Ready
Events:  <none>
We can see that the Status is False and the issued certificate is temporary.
This certificate creates a secret named zcrm365-lets-encrypt-staging-tls, which holds my key pair: tls.crt and tls.key.
⟩ kubectl describe secrets zcrm365-lets-encrypt-staging-tls
Name:         zcrm365-lets-encrypt-staging-tls
Namespace:    default
Labels:       certmanager.k8s.io/certificate-name=zcrm365-lets-encrypt-staging
Annotations:  certmanager.k8s.io/alt-names: test1kongletsencrypt.possibilit.nl
              certmanager.k8s.io/common-name: test1kongletsencrypt.possibilit.nl
              certmanager.k8s.io/ip-sans:
              certmanager.k8s.io/issuer-kind: Issuer
              certmanager.k8s.io/issuer-name: letsencrypt-staging
Type:  kubernetes.io/tls
Data
====
ca.crt:   0 bytes
tls.crt:  1029 bytes
tls.key:  1679 bytes
8. Creating the ingress for my application service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-ingress-zcrm365
  namespace: default
  annotations:
    # kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/issuer: "letsencrypt-staging"
    certmanager.k8s.io/acme-challenge-type: http01
    # certmanager.k8s.io/acme-http01-ingress-class: "true"
    # kubernetes.io/tls-acme: true
    # this annotation requires additional configuration of the
    # ingress-shim (see above). Namely, a default issuer must
    # be specified as arguments to the ingress-shim container.
spec:
  rules:
  - host: test1kongletsencrypt.possibilit.nl
    http:
      paths:
      - backend:
          serviceName: zcrm365dev
          servicePort: 80
        path: /
  tls:
  - hosts:
    - test1kongletsencrypt.possibilit.nl
    secretName: zcrm365-lets-encrypt-staging-tls
Apply the ingress
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 created
I can see our ingresses:
⟩ kubectl get ingress -n default
NAME                        HOSTS                                ADDRESS         PORTS     AGE
cm-acme-http-solver-2m6gl   test1kongletsencrypt.possibilit.nl                   80        3h3m
kong-ingress-zcrm365        test1kongletsencrypt.possibilit.nl   52.166.60.158   80, 443   3h3m
The details of my ingresses are the following:
⟩ kubectl describe ingress cm-acme-http-solver-2m6gl
Name:             cm-acme-http-solver-2m6gl
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host                                 Path  Backends
  ----                                 ----  --------
  test1kongletsencrypt.possibilit.nl
                                       /.well-known/acme-challenge/br0Y8eEsuZ5C2fKoeNVL2y03wn1ZHOQwKQCOOkyWabE   cm-acme-http-solver-9cwhm:8089 (<none>)
Annotations:
  kubernetes.io/ingress.class:                          nginx
  nginx.ingress.kubernetes.io/whitelist-source-range:   0.0.0.0/0
Events:  <none>
---
⟩ kubectl describe ingress kong-ingress-zcrm365
Name:             kong-ingress-zcrm365
Namespace:        default
Address:          52.166.60.158
Default backend:  default-http-backend:80 (<none>)
TLS:
  zcrm365-lets-encrypt-staging-tls terminates test1kongletsencrypt.possibilit.nl
Rules:
  Host                                 Path  Backends
  ----                                 ----  --------
  test1kongletsencrypt.possibilit.nl
                                       /   zcrm365dev:80 (<none>)
Annotations:
  certmanager.k8s.io/acme-challenge-type:  http01
  certmanager.k8s.io/issuer:               letsencrypt-staging
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"certmanager.k8s.io/acme-challenge-type":"http01","certmanager.k8s.io/issuer":"letsencrypt-staging"},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"zcrm365-lets-encrypt-staging-tls"}]}}
Events:  <none>
When I do all this, I can see that my application service is exposed via the kong-ingress-zcrm365 ingress, because it is reachable at my test1kongletsencrypt.possibilit.nl domain.
But as you can see, I don't get the HTTPS certificate for my service; the HTTPS connection is insecure.
I've checked the logs of my cert-manager pod and I have the following:
kubectl logs pod/cert-manager-6f68b58796-hlszm -n cert-manager
I0313 19:40:39.254765 1 controller.go:206] challenges controller: syncing item 'default/zcrm365-lets-encrypt-staging-298918015-0'
I0313 19:40:39.254869 1 logger.go:103] Calling Discover
I0313 19:40:39.257720 1 pod.go:89] Found pod "default/cm-acme-http-solver-s6s2n" with acme-order-url annotation set to that of Certificate "default/zcrm365-lets-encrypt-staging-298918015-0"but it is not owned by the Certificate resource, so skipping it.
I0313 19:40:39.257735 1 pod.go:64] No existing HTTP01 challenge solver pod found for Certificate "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.286823 1 service.go:51] No existing HTTP01 challenge solver service found for Certificate "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.347204 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=919604798
I0313 19:40:39.347437 1 ingress.go:98] No existing HTTP01 challenge solver ingress found for Challenge "default/zcrm365-lets-encrypt-staging-298918015-0". One will be created.
I0313 19:40:39.362118 1 controller.go:178] ingress-shim controller: syncing item 'default/cm-acme-http-solver-2m6gl'
I0313 19:40:39.362257 1 sync.go:64] Not syncing ingress default/cm-acme-http-solver-2m6gl as it does not contain necessary annotations
I0313 19:40:39.362958 1 controller.go:184] ingress-shim controller: Finished processing work item "default/cm-acme-http-solver-2m6gl"
I0313 19:40:39.362702 1 pod.go:89] Found pod "default/cm-acme-http-solver-s6s2n" with acme-order-url annotation set to that of Certificate "default/zcrm365-lets-encrypt-staging-298918015-0"but it is not owned by the Certificate resource, so skipping it.
I0313 19:40:39.363270 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=919604798
I0313 19:40:46.279269 1 controller.go:206] challenges controller: syncing item 'default/zcrm365-lets-encrypt-staging-tls-1561329142-0'
E0313 19:40:46.279324 1 controller.go:230] ch 'default/zcrm365-lets-encrypt-staging-tls-1561329142-0' in work queue no longer exists
I0313 19:40:46.279332 1 controller.go:212] challenges controller: Finished processing work item "default/zcrm365-lets-encrypt-staging-tls-1561329142-0"
I think the HTTP challenge process is not being performed, because Let's Encrypt does not trust that I am the owner of the https://test1kongletsencrypt.possibilit.nl/index.html domain.
How can I solve this in order to get TLS with Let's Encrypt?
Is it possible that I need to use the ingress-shim functionality in my Helm cert-manager installation and/or webhook validation?
IMPORTANT UPDATE
I am currently using kong-ingress-controller as the ingress for my deployment.
I've installed it the way described in this gist.
But I am not sure how I can integrate my kong-ingress-controller with cert-manager when creating my zcrm365-lets-encrypt-staging certificate signing request.
This is the current view of my Kong resources:
⟩ kubectl get all -n kong
NAME                                           READY   STATUS      RESTARTS   AGE
pod/kong-7f66b99bb5-ldp4v                      1/1     Running     0          2d16h
pod/kong-ingress-controller-667b4748d4-sptxm   1/2     Running     782        2d16h
pod/kong-migrations-h6qt2                      0/1     Completed   0          2d16h
pod/konga-85b66cffff-6k6lt                     1/1     Running     0          41h

NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                      AGE
service/kong-ingress-controller   NodePort       10.0.48.131   <none>          8001:32257/TCP               2d16h
service/kong-proxy                LoadBalancer   10.0.153.8    52.166.60.158   80:31577/TCP,443:32323/TCP   2d16h

NAME                                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong                      1         1         1            1           2d16h
deployment.apps/kong-ingress-controller   1         1         1            0           2d16h
deployment.apps/konga                     1         1         1            1           41h

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-7f66b99bb5                      1         1         1       2d16h
replicaset.apps/kong-ingress-controller-667b4748d4   1         1         0       2d16h
replicaset.apps/konga-85b66cffff                     1         1         1       41h

NAME                        COMPLETIONS   DURATION   AGE
job.batch/kong-migrations   1/1           86s        2d16h
The service/kong-proxy service provides the external (public) IP, and when I create kong-ingress-zcrm365, this ingress takes the external IP address provided by kong-proxy. But of course, in the ingress I am indicating it should use nginx, not kong-ingress-controller.
And by the way, I don't have the NGINX ingress controller installed; I am a little confused here.
If someone can point me in the right direction, the help will be highly appreciated.
First, check whether you are using an NGINX ingress; if so, the NGINX ingress controller must actually be running.
You are on the right track, but have you added the ingress controller for your ingress? If you are using an NGINX ingress, you have to add the controller to the K8s cluster (see the sketch below).
Your approach is otherwise fine, cert-manager and everything. Here is a link to a tutorial from DigitalOcean that follows the same approach; just compare the steps:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
If any issue remains, drop a comment for more.
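For completeness, a sketch of installing the NGINX ingress controller with Helm (the chart location is the upstream default; adjust the namespace to your environment):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace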