cert-manager configuration on GKE with clouddns - kubernetes

So I am looking to set up cert-manager on GKE using Google Cloud DNS. A lot of the older questions on SO use http01 rather than dns01. I want to make sure everything is correct so I don't get rate limited.
Here is my issuer.yaml:
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: engineering@company.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - dns01:
        clouddns:
          project: MY_GCP_PROJECT
          # This is the secret used to access the service account
          serviceAccountSecretRef:
            name: clouddns-dns01-solver-svc-acct
            key: key.json
Here is my certificate.yaml:
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: my-website
  namespace: default
spec:
  secretName: my-website-tls
  issuerRef:
    # The issuer created previously
    name: letsencrypt-staging
  dnsNames:
  - my.website.com
I ran these commands to get everything configured:
kubectx my-cluster
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.15.1/cert-manager.yaml
kubectl get pods --namespace cert-manager
gcloud iam service-accounts create dns01-solver --display-name "dns01-solver"
gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:dns01-solver@$PROJECT_ID.iam.gserviceaccount.com --role roles/dns.admin
gcloud iam service-accounts keys create key.json --iam-account dns01-solver@$PROJECT_ID.iam.gserviceaccount.com
kubectl create secret generic clouddns-dns01-solver-svc-acct --from-file=key.json
kubectl apply -f issuer.yaml
kubectl apply -f certificate.yaml
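For anyone reproducing this, one way to watch issuance while it is in flight is to query cert-manager's intermediate resources (a minimal sketch, assuming the v0.15 resource names):
# Follow the chain Certificate -> CertificateRequest -> Order -> Challenge
kubectl get certificate,certificaterequest,order,challenge -n default
# Confirm the issuer registered an ACME account
kubectl describe issuer letsencrypt-staging -n default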
Here is the output from kubectl describe certificaterequests:
Name:          my-certificaterequests
Namespace:     default
Labels:        <none>
Annotations:   cert-manager.io/certificate-name: my-website
               cert-manager.io/private-key-secret-name: my-website-tls
               kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"cert-manager.io/v1alpha2","kind":"Certificate","metadata":{"annotations":{},"name":"my-cluster","namespace":"default...
API Version:   cert-manager.io/v1alpha3
Kind:          CertificateRequest
Metadata:
  Creation Timestamp:  2020-06-28T00:05:55Z
  Generation:          1
  Owner References:
    API Version:           cert-manager.io/v1alpha2
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Certificate
    Name:                  my-cluster
    UID:                   81efe2fd-5f58-4c84-ba25-dd9bc63b032a
  Resource Version:  192470614
  Self Link:         /apis/cert-manager.io/v1alpha3/namespaces/default/certificaterequests/my-certificaterequests
  UID:               8a0c3e2d-c48e-4cda-9c70-b8dcfe94f14c
Spec:
  Csr:  ...
  Issuer Ref:
    Name:  letsencrypt-staging
Status:
  Certificate:  ...
  Conditions:
    Last Transition Time:  2020-06-28T00:07:51Z
    Message:               Certificate fetched from issuer successfully
    Reason:                Issued
    Status:                True
    Type:                  Ready
Events:
  Type    Reason             Age  From          Message
  ----    ------             ---  ----          -------
  Normal  OrderCreated       16m  cert-manager  Created Order resource default/my-certificaterequests-484284207
  Normal  CertificateIssued  14m  cert-manager  Certificate fetched from issuer successfully
I see the secret with kubectl get secret my-website-tls:
NAME             TYPE                DATA   AGE
my-website-tls   kubernetes.io/tls   3      18m
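One way to inspect what was actually issued (a quick check, assuming openssl is available locally; against the staging endpoint the issuer should show Let's Encrypt's "Fake LE" staging CA):
kubectl get secret my-website-tls -o jsonpath='{.data.tls\.crt}' \
  | base64 --decode \
  | openssl x509 -noout -issuer -subject -dates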
Does that mean everything worked and I should try it in prod? What worries me is that I didn't see any DNS records change in my cloud console.
In addition, I wanted to confirm:
How would I change the certificate to be for a wildcard *.company.com?
If in fact I am ready for prod and will get the cert, do I just need to update the secret name in my ingress deployment and redeploy?
Any insight would be greatly appreciated. Thanks

I answered you on Slack already. You would change the name by changing the value in the dnsNames section of the Certificate (or spec.tls.*.hosts if using ingress-shim); you just include the wildcard name exactly as you showed it.
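For illustration, a wildcard variant of the certificate.yaml above might look like this (a sketch; the wildcard entry must be quoted so YAML does not parse the leading *, and wildcards are only supported via the dns01 solver):
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: my-website
  namespace: default
spec:
  secretName: my-website-tls
  issuerRef:
    name: letsencrypt-staging
  dnsNames:
  - company.com
  - "*.company.com"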

Related

How to get token from service account?

I'm new to Kubernetes. I need to get the token from a service account which I created. I used the kubectl get secrets command and got "No resources found in default namespace." in return. Then I used the kubectl describe serviceaccount deploy-bot-account command to check my service account. It returned the following:
Name:                deploy-bot-account
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>
How can I fix this issue?
When a service account is created, Kubernetes automatically creates a secret and maps it to the service account. The secret contains the ca.crt, token, and namespace values that are required for authentication against the API server.
Refer to the following commands:
# kubectl create serviceaccount sa1
# kubectl get serviceaccount sa1 -oyaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa1
  namespace: default
secrets:
- name: sa1-token-l2hgs
You can retrieve the token from the secret mapped to the service account as shown below
# kubectl get secret sa1-token-l2hgs -oyaml
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXlNakV4TVRVeE1Wb1hEVE13TURReU1ERXhNVFV4TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT2lCCk5RTVFPU0Rvdm5IcHQ2MjhkMDZsZ1FJRmpWbGhBb3Q2Uk1TdFFFQ3c3bFdLRnNPUkY4aU1JUDkrdjlJeHFBUEkKNWMrTXkvamNuRWJzMTlUaWEz-NnA0L0pBT25wNm1aSVgrUG1tYU9hS3gzcm13bFZDZHNVQURsdWJHdENhWVNpMQpGMmpBUXRCMkZrTUN2amRqNUdnNnhCTXMrcXU2eDNLQmhKNzl3MEFxNzZFVTBoTkcvS2pCOEd5aVk4b3ZKNStzCmI2LzcwYU53TE54TVU3UjZhV1d2OVJhUmdXYlVPY2RxcWk4WnZtcTZzWGZFTEZqSUZ5SS9GeHd6SWVBalNwRjEKc0xsM1dHVXZONkxhNThUdFhrNVFhVmZKc1JDUGF0ZjZVRzRwRVJDQlBZdUx-lMzl4bW1LVk95TEg5ditsZkVjVApVcng5Qk9LYmQ4VUZrbXdpVSs4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKMkhUMVFvbkswWnFJa0kwUUJDcUJUblRoT0cKeE56ZURSalVSMEpRZTFLT2N1eStZMWhwTVpYOTFIT3NjYTk0RlNiMkhOZy9MVGkwdnB1bWFGT2d1SE9ncndPOQpIVXZVRFZPTDlFazF5SElLUzBCRHdrWDR5WElMajZCOHB1Wm1FTkZlQ0cyQ1I5anpBVzY5ei9CalVYclFGVSt3ClE2OE9YSEUybzFJK3VoNzBiNzhvclRaaC9hVUhybVAycXllakM2dUREMEt1QzlZcGRjNmVna2U3SkdXazJKb3oKYm5OV0NHWklEUjF1VFBiRksxalN5dTlVT1MyZ1dzQ1BQZS8vZ2JqUURmUmpyTjJldmt2RWpBQWF0OEpsd1FDeApnc3ZlTEtCaTRDZzlPZDJEdWphVmxtR2YwUVpXR1FmMFZGaEFlMzIxWE5hajJNL2lhUXhzT3FwZzJ2Zz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  namespace: ZGVmYXVsdA==
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaW-FJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSmtaV1poZFd4MElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WldOeVpYUXVibUZ0WlNJNkluTmhNUzEwYjJ0bGJpMXNNbWhuY3lJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExtNWhiV1VpT2lKellURWlMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzUxYVdRaU9pSXhaRFUyWW1Vd09DMDRORGt4TFRFeFpXRXRPV0ppWWkwd01qUXlZV014TVRBd01UVWlMQ0p6ZFdJaU9pSnplWE4wWlcwNmMyVnlkbWxqWldGalkyOT-FiblE2WkdWbVlYVnNkRHB6WVRFaWZRLmFtdGFORHZUNE9DUlJjZVNpTUE0WjhxaExIeTVOMUlfSG12cTBPWDdvV3RVNzdEWl9wMnVTVm13Wnlqdm1DVFB0T01acUhKZ29BX0puYUphWmlIU3IyaGh3Y2pTN2VPX3dhMF8tamk0ZXFfa0wxVzVNMDVFSG1YZFlTNzdib-DAtZ29jTldxT2RORVhpX1VBRWZLR0RwMU1LeFpFdlBjamRkdDRGWVlBSmJ5LWRqdXNhRjhfTkJEclhJVUNnTzNLUUlMeHZtZjZPY2VDeXYwR3l4ajR4SWRPRTRSSzZabzlzSW5qY0lWTmRvVm85Y3o5UzlvaGExNXdrMWl2VDgwRnBqU3dnUUQ0OTFqdEljdFppUkJBQzIxZkhYMU5scENaQTdIb3Zvck5Yem9maGpmUG03V0xRUUYyQjc4ZkktUEhqMHM2RnNpMmI0NUpzZzFJTTdXWU50UQ==
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: sa1
    kubernetes.io/service-account.uid: 1d56be08-8491-11ea-9bbb-0242ac110015
  name: sa1-token-l2hgs
  namespace: default
type: kubernetes.io/service-account-token
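To pull just the token out of that secret and decode it (a minimal sketch using the secret name above):
kubectl get secret sa1-token-l2hgs -o jsonpath='{.data.token}' | base64 --decode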

Restoring k8s service account tokens

I'd like to restore a kubernetes service account token from a backup (which is actually just an export of the corresponding secret):
apiVersion: v1
kind: Secret
metadata:
  name: my-service-account-token-lqrvp
  annotations:
    kubernetes.io/service-account.name: my-service-account
type: kubernetes.io/service-account-token
data:
  token: bXktc2ltcGxlLXRva2VuCg==
The secret has been applied successfully and was added to the service account:
# kubectl apply -f my-service-account.yaml
secret/my-service-account-token-lqrvp created
# kubectl describe sa my-service-account
Name:                my-service-account
Namespace:           my-namespace
Labels:              <none>
Annotations:         kubernetes.io/service-account.name: my-service-account
Image pull secrets:  my-service-account-dockercfg-lv9hp
Mountable secrets:   my-service-account-token-lv9hp
Tokens:              my-service-account-token-lqrvp
Events:              <none>
Unfortunately, every time I try to access the API using the token, I always get the error "The token provided is invalid or expired":
# kubectl login https://api.my-k8s-cluster.mydomain.com:6443 --token=my-simple-token
error: The token provided is invalid or expired
I know that the token is usually automatically generated by the controller-manager, but is restoring a token supported by Kubernetes?

Ingress and cert manager are not creating certificate

I am trying to deploy ingress-routes in Kubernetes following these guides:
https://cert-manager.io/docs/tutorials/acme/ingress/
https://learn.microsoft.com/en-us/azure/aks/ingress-static-ip
I have deployed a cluster-issuer:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <Myemail>
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux
Then I deployed the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: airflow-ingress
  namespace: airflow6
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencryp
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - <MYhost>
    secretName: tls-secret1
  rules:
  - host: <MYhost>
    http:
      paths:
      - path: /
        backend:
          serviceName: airflow-web
          servicePort: 8080
Then if I try to get the certificate:
kubectl describe certificate tls-secret1 --namespace airflow6
Error from server (NotFound): certificates.cert-manager.io "tls-secret1" not found
I have tried to deploy my own certificate:
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: tls-secret1
  namespace: airflow6
spec:
  secretName: tls-secret1
  dnsNames:
  - <MYhost>
  issuerRef:
    name: letsencrypt
    # We can reference ClusterIssuers by changing the kind here.
    # The default value is Issuer (i.e. a locally namespaced Issuer)
    kind: ClusterIssuer
    group: cert-manager.io
Then I ran the same command:
kubectl describe certificate tls-secret1 --namespace airflow6
Name:         tls-secret1
Namespace:    airflow6
Labels:       <none>
Annotations:
API Version:  cert-manager.io/v1beta1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2020-10-12T10:50:25Z
  Generation:          1
  Resource Version:    9408916
  Self Link:           /apis/cert-manager.io/v1beta1/namespaces/airflow6/certificates/quickstart-example-tls
  UID:                 5c4f06e2-bb61-4eed-8999-58540d4055ce
Spec:
  Dns Names:
    <Myhost>
  Issuer Ref:
    Group:      cert-manager.io
    Kind:       ClusterIssuer
    Name:       letsencrypt
  Secret Name:  tls-secret1
Status:
  Conditions:
    Last Transition Time:  2020-10-12T10:50:25Z
    Message:               Issuing certificate as Secret does not exist
    Reason:                DoesNotExist
    Status:                True
    Type:                  Issuing
    Last Transition Time:  2020-10-12T10:50:25Z
    Message:               Issuing certificate as Secret does not exist
    Reason:                DoesNotExist
    Status:                False
    Type:                  Ready
  Next Private Key Secret Name:  tls-secret1
Events:
  Type    Reason     Age                       From          Message
  ----    ------     ---                       ----          -------
  Normal  Issuing    3m8s                      cert-manager  Issuing certificate as Secret does not exist
  Normal  Requested  3m8s                      cert-manager  Created new CertificateRequest resource "quickstart-example-tls-hl7vk"
  Normal  Requested  <invalid>                 cert-manager  Created new CertificateRequest resource "quickstart-example-tls-vqmbh"
  Normal  Generated  <invalid> (x3 over 3m8s)  cert-manager  Stored new private key in temporary Secret resource "quickstart-example-tls-fgvn6"
  Normal  Requested  <invalid>                 cert-manager  Created new CertificateRequest resource "quickstart-example-tls-5gg9l"
I don't know if I need to create a secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls
But I really don't know what I have to put in tls.crt and tls.key.
In all the guides I have read, a certificate is created automatically when the ingress routes are deployed, but for me it is not working. What am I doing wrong?
No, you are not supposed to create the TLS secret on your own. When you put the secret name in the ingress rule's tls section, the issuer itself creates the secret in the namespace the ingress rule lives in, once the domain verification succeeds.
To cross-check the configs you created, or to create a new one, you can refer to this.
Then you can follow this Stack Overflow post; it will likely help you.
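One more thing worth checking against the manifests above: since cert-manager v0.11 the ingress-shim annotation prefix is cert-manager.io (not certmanager.k8s.io), and its value must match the ClusterIssuer name exactly; the ingress above says letsencryp, missing the final t. Assuming a recent cert-manager release, the annotations would look like:
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /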

Error from server: conversion webhook for cert-manager.io/v1alpha2 for cert-manager ClusterIssuer

When I try configuring TLS Let's Encrypt certificates for my cluster application with a NGINX Ingress controller and cert-manager, something goes wrong with the ClusterIssuer.
My ClusterIssuer is defined as followed:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
When I check out the clusterissuer via kubectl, it says that the ClusterIssuer is READY.
$ kubectl get clusterissuer --namespace mynamespace
Response:
NAME               READY   AGE
letsencrypt-prod   True    13s
But when I describe the ClusterIssuer I get an error.
$ kubectl describe clusterissuer letsencrypt-prod --namespace mynamespace
Response:
Error from server: conversion webhook for cert-manager.io/v1alpha2, Kind=ClusterIssuer failed: Post https://cert-manager-webhook.cert-manager.svc:443/convert?timeout=30s: service "cert-manager-webhook" not found
I installed cert-manager with Helm 3, applying the CRDs manually.
How to solve this?
The cert-manager chart does not accept different namespacing when the CRDs are applied manually to your cluster: the static CRD manifests point the conversion webhook at the cert-manager namespace. Instead of applying them manually first, install the CRDs as part of the Helm 3 release.
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update
$ helm install \
    cert-manager jetstack/cert-manager \
    --namespace mynamespace \
    --version v0.15.1 \
    --set installCRDs=true
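After the install, you can sanity-check that the conversion webhook service the CRDs point at actually exists (a quick check, assuming kubectl 1.14+ for the -A flag):
# The service named in the CRD conversion config must resolve
kubectl get svc -A | grep cert-manager-webhook
# The webhook pod should be running in the release namespace
kubectl get pods -n mynamespace | grep webhook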
I solved this issue by adding namespace: cert-manager under metadata
It would look something like this:
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    email: user@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx

cert-manager after major update stopped working

The issue started after a major update of cert-manager from version 0.6.0 to 0.11.0.
The update was done via a config backup, cert-manager removal, helm update, then a fresh cert-manager install and backup restore. No config changes were made during the update.
The pod and service are up, but no certs have been issued since the update.
Here are the logs from the cert-manager service:
E0114 04:34:18.126497 1 sync.go:57] cert-manager/controller/ingress-shim "msg"="failed to determine issuer to be used for ingress resource" "error"="failed to determine issuer name to be used for ingress resource" "resource_kind"="Ingress" "resource_name"="ucb-sandbox-ingress" "resource_namespace"="cloud-engagement-sandbox"
I0114 04:34:18.126791 1 controller.go:135] cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="cloud-engagement-sandbox/ucb-sandbox-ingress"
I0114 04:34:18.127064 1 controller.go:129] cert-manager/controller/ingress-shim "level"=0 "msg"="syncing item" "key"="cloud-engagement-sandbox/ucf-sandbox-ingress"
E0114 04:34:18.127294 1 sync.go:57] cert-manager/controller/ingress-shim "msg"="failed to determine issuer to be used for ingress resource" "error"="failed to determine issuer name to be used for ingress resource" "resource_kind"="Ingress" "resource_name"="ucf-sandbox-ingress" "resource_namespace"="cloud-engagement-sandbox"
I0114 04:34:18.127534 1 controller.go:135] cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="cloud-engagement-sandbox/ucf-sandbox-ingress"
My ClusterIssuer yaml:
apiVersion: certmanager.k8s.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [removed]
    privateKeySecretRef:
      name: letsencrypt-prod
    http01: {}
And here is the output of kubectl describe clusterissuer letsencrypt-prod:
Name:         letsencrypt-prod
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"ClusterIssuer","metadata":{"annotations":{},"creationTimestamp":"2019-02-17T22:42:55Z"...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         ClusterIssuer
Metadata:
  Creation Timestamp:  2019-02-17T22:42:55Z
  Generation:          1
  Resource Version:    53383155
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/clusterissuers/letsencrypt-prod
  UID:                 5e0c332f-3305-11e9-93cb-069443f5754c
Spec:
  Acme:
    Email:  [removed]
    Http 01:
    Private Key Secret Ref:
      Key:
      Name:  letsencrypt-prod
    Server:  https://acme-v02.api.letsencrypt.org/directory
Status:
  Acme:
    Uri:  https://acme-v02.api.letsencrypt.org/acme/acct/51694394
  Conditions:
    Last Transition Time:  2019-02-17T22:42:57Z
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:  <none>
The apiVersion has changed from certmanager.k8s.io/v1alpha1 to cert-manager.io/v1alpha2, but you still have CRDs with the old apiVersion, which you need to remove.
Follow the steps below to upgrade cert-manager, paying attention to steps 3 and 4:
1. Back up existing cert-manager resources, as per the backup and restore guide.
2. Uninstall cert-manager.
3. Ensure the old cert-manager CRD resources have also been deleted: kubectl get crd | grep certmanager.k8s.io
4. Update the apiVersion on all your backed-up resources from certmanager.k8s.io/v1alpha1 to cert-manager.io/v1alpha2.
5. Re-install cert-manager from scratch according to the installation guide.
Here is the official upgrade guide
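Concretely, step 3 might look like this (a sketch; the CRD names to delete are whatever the grep still reports, such as the two leftovers visible in the listing further down):
# List any CRDs still registered under the old API group
kubectl get crd | grep certmanager.k8s.io
# Delete the stale ones by name
kubectl delete crd challenges.certmanager.k8s.io orders.certmanager.k8s.io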
It's sorted. The culprits were:
1. An incomplete cert-manager install.
2. I also modified the backup and replaced ALL occurrences of certmanager.k8s.io with cert-manager.io and v1alpha1 with v1alpha2.
3. I manually deleted the other CRDs related to certmanager.k8s.io.
Thanks for the reply.
I removed the old CRDs after helm purge cert-manager and installed a fresh version 0.12 using manifests.
My current CRDs are below:
kubectl get crd
NAME                                    CREATED AT
certificaterequests.cert-manager.io    2019-11-01T01:37:03Z
certificates.cert-manager.io           2019-11-01T01:37:03Z
challenges.acme.cert-manager.io        2019-11-01T01:37:03Z
challenges.certmanager.k8s.io          2020-01-15T05:31:48Z
clusterissuers.cert-manager.io         2019-11-01T01:37:03Z
healthstates.azmon.container.insights  2019-08-29T10:13:59Z
issuers.cert-manager.io                2019-11-01T01:37:03Z
orders.acme.cert-manager.io            2019-11-01T01:37:03Z
orders.certmanager.k8s.io              2020-01-15T05:31:49Z
And here is the updated description of the ClusterIssuer:
kubectl describe ClusterIssuer letsencrypt-prod
Name:         letsencrypt-prod
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  cert-manager.io/v1alpha2
Kind:         ClusterIssuer
Metadata:
  Creation Timestamp:  2020-01-15T05:38:32Z
  Generation:          1
  Resource Version:    71299934
  Self Link:           /apis/cert-manager.io/v1alpha2/clusterissuers/letsencrypt-prod
  UID:                 4465c9ce-3759-11ea-be9c-0a7022c023e8
Spec:
  Acme:
    Email:
    Private Key Secret Ref:
      Name:  letsencrypt-prod
    Server:  https://acme-v02.api.letsencrypt.org/directory
    Solvers:
      Http 01:
        Ingress:
          Class:  nginx
      Selector:
Events:  <none>
I don't have an ingress under the cert-manager namespace. Also, my backup includes the old certificates, CRDs, Issuers, Certs, Cert requests, etc., but I don't know how to restore just what is needed.