No matches for kind ClusterIssuer on a Digital Ocean Kubernetes Cluster

I have been following this guide to create an nginx-ingress which works fine.
Next I want to create a ClusterIssuer object called letsencrypt-staging and use the Let's Encrypt staging server, but I get this error:
kubectl create -f staging_issuer.yaml
error: unable to recognize "staging_issuer.yaml": no matches for kind
"ClusterIssuer" in version "certmanager.k8s.io/v1alpha1"
I have searched for solutions but can't find anything that works for me or that I can understand. What I found is mostly bug reports.
Here is the YAML file I used to create the ClusterIssuer:
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: your_email_address_here
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}

Try following this link. Let's Encrypt has announced that it will block all traffic from cert-manager versions < 0.8.0, so use Jetstack's installation steps (a rough sketch follows below) and then follow this link for TLS certificate creation. It worked for me.
Let me know if you face issues.
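For reference, a Helm-based install along those lines looks roughly like this. This is only a sketch assuming the Jetstack chart repo and the chart's documented installCRDs flag; the namespace and version choices are illustrative, not taken from the answer above:
helm repo add jetstack https://charts.jetstack.io
helm repo update
# namespace and chart values shown here are the commonly documented defaults, adjust as needed
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true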

I fixed the problem by running helm del --purge cert-manager
and then
helm install --name cert-manager --namespace kube-system stable/cert-manager --set createCustomResource=true
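Whichever way cert-manager gets reinstalled, a quick sanity check that the ClusterIssuer kind is actually registered before retrying is sketched below (the grep pattern assumes the old certmanager.k8s.io API group used in this question):
# the CRDs for the certmanager.k8s.io group should show up here
kubectl get crd | grep certmanager
kubectl api-resources | grep -i issuer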

Sometimes the problem is the whitespace used in the .yaml file. Ensure that you are not using tabs instead of spaces. You can delete the line containing "kind" (or whichever line is showing the error) and retype it, indenting with the space bar rather than tabs.
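One way to catch indentation mistakes like stray tabs before hitting the API server is a client-side dry run, optionally combined with yamllint if it is installed. This is just a local sanity check, not part of the original answer (newer kubectl uses --dry-run=client; older releases use plain --dry-run):
# parses the manifest without creating anything; tabs will fail the YAML parse immediately
kubectl create --dry-run=client -f staging_issuer.yaml
# optional: flags tabs and inconsistent indentation
yamllint staging_issuer.yaml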

Related

cert-manager fails to create certificate in K8s cluster with Istio and LetsEncrypt

I have a Kubernetes cluster with Istio installed and I want to secure the gateway with TLS using cert-manager.
So, I deployed a cert-manager, issuer and certificate as per this tutorial: https://github.com/tetratelabs/istio-weekly/blob/main/istio-weekly/003/demo.md
(to a cluster reachable via my domain)
But, the TLS secret does not get created - only what seems to be a temporary one with a random string appended: my-domain-com-5p8rd
The cert-manager Pod has these 2 lines spammed in the logs:
W0208 19:30:20.548725 1 reflector.go:424] k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169: failed to list *v1.Challenge: the server could not find the requested resource (get challenges.acme.cert-manager.io)
E0208 19:30:20.548785 1 reflector.go:140] k8s.io/client-go@v0.26.0/tools/cache/reflector.go:169: Failed to watch *v1.Challenge: failed to list *v1.Challenge: the server could not find the requested resource (get challenges.acme.cert-manager.io)
Now, I don't understand why it's trying to reach "challenges.acme.cert-manager.io", because my Issuer resource has spec.acme.server: https://acme-v02.api.letsencrypt.org/directory
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: istio-system
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: removed@my.domain.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector: {}
      http01:
        ingress:
          class: istio
then
kubectl get certificate -A
shows the certificate READY = False
kubectl describe certificaterequest -A
returns
Status:
  Conditions:
    Last Transition Time:  2023-02-08T18:09:55Z
    Message:               Certificate request has been approved by cert-manager.io
    Reason:                cert-manager.io
    Status:                True
    Type:                  Approved
    Last Transition Time:  2023-02-08T18:09:55Z
    Message:               Waiting on certificate issuance from order istio-system/my--domain-com-jl6gm-3167624428: "pending"
    Reason:                Pending
    Status:                False
    Type:                  Ready
Events:                    <none>
Notes:
- The cluster does not have a Load Balancer, so I expose the ingress-gateway with NodePort(s).
- accessing the https://my.domain.com/.well-known/acme-challenge/
- the cluster is installed on kubeadm
- cluster networking is done via Calico
- http01 challenge
Thanks.
Figured this out.
Turns out the 'get challenges.acme.cert-manager.io' is not an HTTP GET, but rather a resource GET within the K8s cluster.
There is a 'challenges.acme.cert-manager.io' CustomResourceDefinition in cert-manager.yml.
Running this command
kubectl get crd -A
returns a list of all CustomResourceDefinitions, but this one was missing.
I copied it out of cert-manager.yml into a separate file and applied it manually - suddenly the challenge got created, and so did the secret.
Why it didn't get applied with everything else in cert-manager.yml is beyond me.
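If you hit something similar, a check along these lines shows whether all of the cert-manager CRDs actually made it into the cluster, and re-applying the full manifest usually restores a missing one. The release version in the URL is only an example; use whatever version your cert-manager.yml came from:
# should list certificates, certificaterequests, orders, challenges, issuers and clusterissuers
kubectl get crd | grep cert-manager.io
# --server-side avoids the annotation size limit that very large CRDs can hit with client-side apply
kubectl apply --server-side -f https://github.com/cert-manager/cert-manager/releases/download/v1.11.0/cert-manager.yaml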

ApiVersions missing in updated Cert-manager. Cert-manager Conversion webhook for cert-manager.io/v1alpha2, Kind=ClusterIssuer failed

When I try to get ClusterIssuers in my k8s cluster, I receive the error message below. Can someone help me troubleshoot this?
kubectl get clusterissuers
Output: Error from server: conversion webhook for cert-manager.io/v1alpha2, Kind=ClusterIssuer failed: an error on the server ("") has prevented the request from succeeding
Here's my clusterissuer.yml file:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-dev-certificate
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: my.email@org.co
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-dev-certificate
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx
Here's the kubernetes version:
kubectl version --short
Output: Client Version: v1.23.1
Server Version: v1.22.12-gke.2300
Initially, I thought the problem was in how I was fetching ClusterIssuers or Certificates from cert-manager. After more troubleshooting, I found that you cannot jump directly from cert-manager 1.5.5 to 1.9.1: some CRD API versions get missed in between, which breaks fetching resources like Certificates and ClusterIssuers that cert-manager defines as CRDs.
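As a side note, here is a sketch of how to see which API versions a given cert-manager CRD still serves and stores before attempting such a jump; these are standard CRD fields, and the CRD name is just one example:
# prints the served versions and the stored versions of the ClusterIssuer CRD
kubectl get crd clusterissuers.cert-manager.io \
  -o jsonpath='{.spec.versions[*].name}{"\n"}{.status.storedVersions}{"\n"}'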
Solution: I downgraded cert-manager back to 1.5.5, followed the steps from cert-manager's blog on making CRDs ready for the update, and then upgraded cert-manager to 1.9.1. Everything was working fine afterwards. Here are the steps I followed (kaf here is shorthand for kubectl apply -f):
1. Downgrade back to Cert-Manager 1.5.5:
kaf https://github.com/cert-manager/cert-manager/releases/download/v1.5.5/cert-manager.yaml
2. Install cmctl for cert-manager:
curl -sSL -o cmctl.tar.gz https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cmctl-linux-amd64.tar.gz
tar xzf cmctl.tar.gz
mv cmctl /usr/local/bin
cmctl upgrade --help
3. Upgrade API versions before updating cert-manager:
cmctl upgrade migrate-api-version
Now it's safe to upgrade cert-manager to 1.7.1.
4. Upgrade to Cert-Manager 1.7.1 or 1.9.1:
kaf https://github.com/cert-manager/cert-manager/releases/download/v1.7.2/cert-manager.yaml
Verify by fetching the CRD resources:
kubectl get clusterissuer
kubectl get certificates
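To double-check that the conversion webhook itself is healthy after the upgrade, cmctl also ships a check subcommand (the wait duration here is arbitrary):
cmctl check api --wait=2m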

How to solve via Rancher a Kubernetes Ingress Controller Fake Certificate error

I installed Rancher 2.6 on top of a kubernetes cluster. As cert-manager version I used 1.7.1.
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.7.1 --set installCRDs=true --create-namespace
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=MYDOMAIN.org \
  --set bootstrapPassword=MYPASSWORD \
  --set ingress.tls.source=letsEncrypt \   # <--- I use letsEncrypt
  --set letsEncrypt.email=cert@mydomain.org \
  --set letsEncrypt.ingress.class=nginx
After the installation was done, Rancher was successfully deployed at https://mydomain.org.
Let's Encrypt SSL worked fine there. With Rancher I created a new RKE2 cluster for my apps.
So, I created a new Deployment for testing
"rancher/hello-world:latest"
3x Replicas
Calling the NodePort IP address with its port directly worked: http://XXXXXX:32599/
At this point I want to use an HTTPS subdomain, hello.mydomain.org.
After studying the documentation, my approach was to create a new Ingress; I set it up as shown in the screenshot.
After creating the new Ingress, I checked the Ingresses section of my hello-world deployment, and the new Ingress is available there.
My expectation was that I could now go to https://hello.mydomain.org. But HTTPS doesn't work here; instead I got:
NET::ERR_CERT_AUTHORITY_INVALID
Subject: Kubernetes Ingress Controller Fake Certificate
Issuer: Kubernetes Ingress Controller Fake Certificate
Expires on: 03.09.2023
Current date: 03.09.2022
Where did I make a mistake? How do I use Let's Encrypt for my deployments?
The fake certificate usually implies that the ingress controller is serving its default backend instead of what you expect it to. While a particular Ingress resource might be served over HTTP as expected, the controller doesn't consider it servable over HTTPS. The most likely explanation is that the certificate is missing and the Ingress host isn't configured for HTTPS. When you installed Rancher, you only configured Rancher's own Ingress; you need to set up certificates for each Ingress resource separately.
You didn't mention which ingress controller you are using. With Let's Encrypt or other ACME-based certificate issuers you'll usually need a certificate controller to manage certificate generation and renewal. I'd recommend cert-manager. There is an excellent tutorial for setting up Let's Encrypt, cert-manager and nginx-ingress-controller. If you're using Traefik, it is capable of generating Let's Encrypt certificates by itself, but the support is only partial in Kubernetes environments (i.e. no high availability), so your best bet is to use cert-manager even with that.
Even once you have that set up, cert-manager doesn't automatically issue certificates for every Ingress, only for those that request one. You need annotations for that.
With cert-manager, once you have set up the Issuer/ClusterIssuer and the annotation, your Ingress resource should look something like this (you can check the YAML from Rancher):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
  name: hello-ingress
  namespace: hello-ns
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - backend:
          service:
            name: hello-service
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - hello.example.com
    secretName: hello-letsencrypt-cert
You might need to edit YAML directly and add spec.tls.secretName. If all is well, once you apply metadata.annotations and have set up spec.tls.hosts and spec.tls.secretName, the verification should happen soon and the ingress address should change to https://hello.example.com.
As a side note, I've experienced this issue also when the Ingress is behind a reverse proxy, such as HAproxy, and that reverse proxy (or Ingress) is not properly set up to use proxy protocol. You don't mention using one, but I'll write it just for the record.
If these steps don't solve your problem, you should check kubectl describe on the ingress and kubectl logs on the nginx-controller pods and see if anything stands out.
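For completeness, those checks look roughly like this; the namespaces and resource names below are taken from the example Ingress above and from a typical nginx-ingress install, so adjust them to your setup:
kubectl describe ingress hello-ingress -n hello-ns
kubectl describe certificate -n hello-ns      # shows whether the hello-letsencrypt-cert certificate is being issued
kubectl get challenges -A                     # pending ACME challenges, if any
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller   # deployment name depends on how the controller was installed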
EDIT: I jumped to a conclusion, so I restructured this answer to also note the possibility that a certificate manager is missing altogether.

cert-manager: let's encrypt refuses ACME account

I followed the cert-manager tutorial to enable TLS in my k3s cluster, so I modified the letsencrypt-staging issuer file to look like this:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: mail@example.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: traefik
But when I deploy it, I get the error Failed to verify ACME account: Get "https://acme-staging-v02.api.letsencrypt.org/directory": read tcp 10.42.0.96:45732->172.65.46.172:443: read: connection reset by peer. That happens only with the staging ClusterIssuer; the production example from the tutorial works flawlessly. I researched this error and it seems to be something related to the Kubernetes DNS, but I don't know how to test the DNS, or any other way to figure this error out.
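One way to test DNS resolution from inside the cluster is to run a throwaway pod and resolve the ACME host from there; the image and pod name here are arbitrary:
# any small image with nslookup works; busybox is just an example
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup acme-staging-v02.api.letsencrypt.org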
Tested the Kubernetes DNS and it is up and running, so it must be an error with cert-manager, especially because the prod certificate's status says Ready=True.
So it seems like I ran into a Let's Encrypt rate limit. After waiting for a day, the certificate now works.

Cert-manager order is in invalid state

I'm migrating from a GitLab-managed Kubernetes cluster to a self-managed cluster. In this self-managed cluster I need to install nginx-ingress and cert-manager. I have already managed to do the same for a cluster used for review environments. I use the latest Helm 3 RC to manage this, so I won't need Tiller.
So far, I ran these commands:
# Add Helm repos locally
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add jetstack https://charts.jetstack.io
# Create namespaces
kubectl create namespace managed
kubectl create namespace production
# Create cert-manager crds
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
# Install Ingress
helm install ingress stable/nginx-ingress --namespace managed --version 0.26.1
# Install cert-manager with a cluster issuer
kubectl apply -f config/production/cluster-issuer.yaml
helm install cert-manager jetstack/cert-manager --namespace managed --version v0.11.0
This is my cluster-issuer.yaml:
# Based on https://docs.cert-manager.io/en/latest/reference/issuers.html#issuers
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: XXX # This is an actual email address in the real resource
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - selector: {}
      http01:
        ingress:
          class: nginx
I installed my own Helm chart named docs. All resources from the Helm chart are installed as expected. Using cURL, I can fetch the page over HTTP. Google Chrome redirects me to an HTTPS page with an invalid certificate though.
The additional following resources have been created:
$ kubectl get secrets
NAME       TYPE                DATA   AGE
docs-tls   kubernetes.io/tls   3      18m

$ kubectl get certificaterequests.cert-manager.io
NAME                 READY   AGE
docs-tls-867256354   False   17m

$ kubectl get certificates.cert-manager.io
NAME       READY   SECRET     AGE
docs-tls   False   docs-tls   18m

$ kubectl get orders.acme.cert-manager.io
NAME                            STATE     AGE
docs-tls-867256354-3424941167   invalid   18m
It appears everything is blocked by the cert-manager order in an invalid state. Why could it be invalid? And how do I fix this?
It turns out that, in addition to a correct DNS A record for @, there were some AAAA records pointing to an IPv6 address I don't recognize. Removing those records and redeploying resolved the issue for me.
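If you run into the same invalid order state, two checks along these lines narrow it down quickly; the order name comes from the output above and the domain is a placeholder:
# the order's status and events usually spell out why it went invalid
kubectl describe orders.acme.cert-manager.io docs-tls-867256354-3424941167
# compare what public DNS returns for the hostname; stray AAAA records like the ones described above will show up here
dig +short A docs.example.com
dig +short AAAA docs.example.com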