Kubernetes cert-manager and nginx - kubernetes

I am attempting to set up jetstack/cert-manager on Kubernetes. It will provide the certificates for several of my subdomains. It worked great until I needed to create an nginx.conf file.
What happens now is that all the requests from cert-manager to http://www.redacted.com/.well-known/challenge/xx are handled by my application pod, rather than the cert-manager pods.
Does anyone know how I can keep an nginx config file but still have all .well-known requests handled by the cert-manager pods? It seems that if I could choose the order of the ingresses, I could give my application ingress the lowest priority, so that the automatically created cert-manager ingresses are matched first.
Many thanks!

Try adding a Service for your cert-manager pods and then create an Ingress resource with a path of /.well-known. This will route all the requests to the URL you mentioned to the Service you create for those pods.
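A minimal sketch of such an Ingress, assuming a Service named cert-manager-solver listening on port 8089 in front of the solver pods (both the name and the port are placeholders, not something cert-manager creates for you under that name):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: acme-challenge
spec:
  rules:
  - host: www.redacted.com
    http:
      paths:
      - path: /.well-known
        pathType: Prefix
        backend:
          service:
            name: cert-manager-solver   # placeholder Service in front of the solver pods
            port:
              number: 8089              # placeholder port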

So I figured this out and it of course wasn't an issue with cert-manager!
I had my root domain, e.g. <redacted>.com, automatically redirecting to www.<redacted>.com, but was also trying to generate a certificate for <redacted>.com as well as for the subdomains. Cert-manager failed because it couldn't reach the .well-known/acme-challenge of the root domain, as it was being redirected.
The way I plan to solve this is to validate the root certificate through DNS instead of HTTP; that way the certificate will still be valid for <redacted>.com and I can still forward both HTTP and HTTPS connections to the www subdomain.
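A minimal sketch of what a DNS-based issuer might look like, assuming the cert-manager.io/v1 API and Cloudflare as the DNS provider (both are assumptions, and all names are placeholders; any supported DNS-01 provider works the same way):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    email: you@redacted.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns-account-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token   # Secret holding the DNS provider credentials
            key: api-token
      selector:
        dnsNames:
        - <redacted>.com                 # only the root domain uses the DNS-01 solver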

Related

Stop TLS certificates from being automatically recreated by cert-manager

I was investigating a few things about cert-manager.
TLS certificates are automatically recreated by cert-manager.
I need to somehow deregister a domain / certificate from being regenerated. I guess I would need to tell cert-manager not to take care about a given domain anymore.
I do not have any clue how to do that right now. Can someone help?
cert-manager is an application implemented using the operator pattern.
In one sentence, it watches for a Custom Resource (CR for short) named Certificate in the Kubernetes API and creates and updates Secret resources to store the certificate data.
If you delete the Secret resource but don't delete the Certificate CR, cert-manager will recreate the secret for you.
The right way to "deregister a domain", or more precisely to make cert-manager stop generating a certificate for a domain, is to delete the Certificate CR related to your domain.
To get a list of all the Certificate CRs in your cluster, you can use kubectl:
kubectl get certificate -A
When you have found the Certificate related to the domain you want to remove, simply delete it:
kubectl -n <namespace> delete certificate <certificate name>
Once you have deleted the Certificate CR, you might also want to delete the Secret containing the TLS cert one more time. This time cert-manager will not recreate it.
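For example (the namespace and secret name are placeholders; the secret name is whatever the Certificate's spec.secretName pointed to):
kubectl -n <namespace> delete secret <tls secret name>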

Problems when upgrading kube-lego to cert-manager

I use kube-lego version 0.1.5-a9592932, but this service is deprecated and the time has come to migrate to cert-manager. While testing the migration I lost the secret "kube-lego-account", but I still need it! Is it possible to force it to generate the secret again? I restarted kube-lego, checked the logs and found this:
Attempting to create new secret" context=secret name=kube-lego-account namespace=default
But the secret was not created. How can I solve this issue?
Let's take another path. The official Let's Encrypt docs say they will no longer support versions below 0.8, so I recommend installing cert-manager provided by Jetstack; you can find the Helm chart for it here.
Then follow this Stack Overflow post for the configuration. Note that if the API version mentioned in that post isn't supported for the ClusterIssuer, use instead:
apiVersion: cert-manager.io/v1alpha2
Note that the TLS secret named in the certificate will be auto-generated by cert-manager, and cert-manager automatically starts an ACME challenge to validate the domain once you reference that secret name in the TLS section of your ingress rule.
That should solve the issue, and the certificate's status will change to Ready after the domain verification.
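A minimal sketch of what that configuration might look like with the cert-manager.io/v1alpha2 API (the issuer name, email, hostname and secret name are placeholders):
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
Then reference the issuer and the TLS secret in the ingress rule, for example:
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - www.example.com
    secretName: www-example-com-tls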

GOCD agent registration with Kubernetes

I want to register kubernetes-elastic-agents with gocd-server. According to the doc https://github.com/gocd/kubernetes-elastic-agents/blob/master/install.md
I need a Kubernetes security token and the cluster CA certificate. My Kubernetes cluster is running. How do I create a security token? Where can I find the cluster CA cert?
Jake
There are two answers:
The first is that it's very strange that one would need to manually input those things, since they live in a well-known location on disk in any Pod (that isn't excluded via the automountServiceAccountToken field), as described in Accessing the API from a Pod.
The second is that if you really do need a statically provisioned token belonging to a ServiceAccount, then you can either retrieve an existing token from the Secret that is created by default for every ServiceAccount, or create a second Secret as described in Manually create a service account API token
The CA cert you requested is present in every Pod in the cluster at the location mentioned in the first link, as well as in the ~/.kube/config of anyone who wishes to access the cluster. kubectl config view -o yaml will show it to you.
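A sketch of both approaches, assuming an older cluster where every ServiceAccount still gets a token Secret created automatically (on newer clusters you may need to create that Secret manually, as the second link describes):
# From inside any Pod that mounts the service account:
cat /var/run/secrets/kubernetes.io/serviceaccount/token
cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# From outside the cluster, pull the same data out of the ServiceAccount's token Secret:
SECRET=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
kubectl get secret "$SECRET" -o jsonpath='{.data.ca\.crt}' | base64 --decode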

Configuring Lets Encrypt with Traefik using Helm

I'm deploying traefik to my kubernetes cluster using Helm. Here's what I have at the moment:
helm upgrade --install load-balancer --wait --set ssl.enabled=true,ssl.enforced=true,acme.enabled=true,acme.email=an#email.com stable/traefik
I'm trying to configure Let's Encrypt. According to this documentation, you add the domains to the bottom of the .toml file.
Looking at the code for the helm chart, there's no provision for such configuration.
Is there another way to do this or do I need to fork the chart to create my own variation of the .toml file?
Turns out this is the chicken and the egg problem, described here.
For the helm chart, if acme.enabled is set to true, then Traefik will automatically generate and serve certificates for domains configured in Kubernetes ingress rules. This is the purpose of the onHostRule = true line in the configuration referenced above.
To use Traefik with Let's Encrypt, we have to create an A record in our DNS server that points to the IP address of our load balancer, which we can't do until Traefik is up and running. However, this configuration needs to exist before Traefik starts.
The only solution (at this stage) is to kill the first Pod after the A record change has propagated.
Note that the stable/traefik chart now supports the ACME DNS-01 challenge. Using DNS avoids the chicken-and-egg problem.
See: https://github.com/kubernetes/charts/tree/master/stable/traefik#example-aws-route-53
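For example, something like this (a sketch based on that README example, assuming Route 53 as the DNS provider; the exact value names should be verified against the chart's values.yaml and the README above):
helm upgrade --install load-balancer --wait \
  --set ssl.enabled=true,acme.enabled=true,acme.email=an#email.com,acme.challengeType=dns-01,acme.dnsProvider.name=route53 \
  stable/traefik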

Not Able To Create Pod in Kubernetes

I followed the official Kubernetes installation guide to install Kubernetes on Fedora 22 servers. Everything worked out for me during the installation.
After the installation, I could see all my nodes up, running and connected to the master. However, it kept failing when I tried to create a simple pod, following the 101 guide.
$ kubectl create -f pod-nginx.yaml
Error from server: error when creating "pod-nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Do I need to create an API token? If yes, how?
I googled the issue, but without any helpful results. It looks like I am the only one on the planet hitting this issue.
Does anyone have ideas on this?
The ServiceAccount admission controller prevents pods from being created until their service account in their namespace is initialized.
If the controller-manager is started with the appropriate arguments, it will automatically populate namespaces with a default service account, and auto-create the API token for that service account.
It looks like that guide needs to be updated with the information from this comment:
https://github.com/GoogleCloudPlatform/kubernetes/issues/11355#issuecomment-127378691
openssl genrsa -out /tmp/serviceaccount.key 2048
vim /etc/kubernetes/apiserver:
KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"
vim /etc/kubernetes/controller-manager:
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/tmp/serviceaccount.key"
systemctl restart kube-controller-manager.service
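To verify (a sketch, assuming the default namespace from the error message above): check that the default service account now has a token, then retry creating the pod.
kubectl get serviceaccount default -o yaml
kubectl create -f pod-nginx.yaml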