I use kube-lego version 0.1.5-a9592932, but this service is deprecated and the time has come to migrate to cert-manager. While testing the migration I lost the secret "kube-lego-account", which I still need. Is it possible to force it to generate the secret? I restarted kube-lego, checked the logs, and found this:
Attempting to create new secret" context=secret name=kube-lego-account namespace=default
But the secret was not created. How can I solve this issue?
Let's take another path: the Let's Encrypt official docs say that they will no longer support versions below 0.8, so I recommend you install cert-manager, provided by Jetstack, which you can find here, and install its Helm chart.
Then follow this Stack Overflow post for the configuration. Note that if the API version mentioned in that post is not supported for the ClusterIssuer, use this instead:
apiVersion: cert-manager.io/v1alpha2
Note that the TLS secret named in the Certificate will be auto-generated by cert-manager, and it automatically starts an ACME challenge to validate the domain once you reference that secret name in the TLS section of your Ingress rule.
That should solve the issue, and the certificate's status will change to Ready after the domain verification.
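For reference, a minimal ClusterIssuer using that API version might look like the sketch below. The email, ingress class, and secret names are placeholders to replace with your own values:

```
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder contact email
    privateKeySecretRef:
      name: letsencrypt-prod-account    # ACME account key secret, created by cert-manager
    solvers:
    - http01:
        ingress:
          class: nginx                  # placeholder ingress class
```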
I was investigating certain things about cert-manager.
TLS certificates are automatically recreated by cert-manager.
I need to somehow deregister a domain / certificate so it stops being regenerated. I guess I would need to tell cert-manager not to take care of a given domain anymore.
I do not have any clue how to do that right now. Can someone help?
cert-manager is an application implemented using the operator pattern.
In one sentence: it watches for a Custom Resource (CR for short) named Certificate in the Kubernetes API, and it creates and updates Secret resources to store certificate data.
If you delete the Secret resource but don't delete the Certificate CR, cert-manager will recreate the secret for you.
The right way to "deregister a domain", or better put, to "make cert-manager stop generating a certificate for a domain", is to delete the Certificate CR related to your domain.
To get a list of all the Certificate CRs in your cluster you can use kubectl:
kubectl get certificate -A
When you have found the Certificate related to the domain you want to remove, simply delete it:
kubectl -n <namespace> delete certificate <certificate name>
Once you have deleted the Certificate CR, you might also want to delete the Secret containing the TLS cert one more time. This time cert-manager will not recreate it.
I am trying to use cert-manager to manage certificates in OpenShift, but I have seen some examples use apiVersion: cert-manager.io while others use apiVersion: certmanager.k8s.io.
I checked in my OpenShift cluster, and it seems there is only certmanager.k8s.io, even though I have installed the latest cert-manager.
# oc get crd | grep certmanager.k8s.io
certificates.certmanager.k8s.io 2020-01-07T17:27:09Z
challenges.certmanager.k8s.io 2020-01-07T17:27:10Z
clusterissuers.certmanager.k8s.io 2020-01-07T17:27:08Z
issuers.certmanager.k8s.io 2020-01-07T17:27:09Z
orders.certmanager.k8s.io 2020-01-07T17:27:09Z
I am confused: what is the difference between them, and which one should I use? Thanks for your ideas.
Due to new policies in the upstream Kubernetes project, we have renamed the API group from certmanager.k8s.io to cert-manager.io.
Here is the upstream Kubernetes KEP: K8s group protection.
You should use the cert-manager.io API group in your YAML.
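For example, a Certificate under the renamed group would start like this (a sketch; the certificate, secret, and issuer names are placeholders):

```
apiVersion: cert-manager.io/v1alpha2    # new API group
kind: Certificate
metadata:
  name: example-cert                    # placeholder name
  namespace: default
spec:
  secretName: example-cert-tls          # secret cert-manager will create
  dnsNames:
  - example.com
  issuerRef:
    name: letsencrypt-prod              # placeholder issuer
    kind: ClusterIssuer
```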
As per https://docs.traefik.io/configuration/acme/
I've created a secret like so:
kubectl --namespace=gitlab-managed-apps create secret generic traefik-credentials \
--from-literal=GCE_PROJECT=<id> \
--from-file=GCE_SERVICE_ACCOUNT_FILE=key.json \
And passed it to the Helm chart using: --set acme.dnsProvider.$name=traefik-credentials
However I am still getting the following error:
{"level":"error","msg":"Unable to obtain ACME certificate for domains \"traefik.my.domain.com\" detected thanks to rule \"Host:traefik.my.domain.com\" : cannot get ACME client googlecloud: Service Account file missing","time":"2019-01-14T21:44:17Z"}
I don't know why (or whether) Traefik uses the GCE_SERVICE_ACCOUNT_FILE variable. All Google tooling and third-party integrations use the GOOGLE_APPLICATION_CREDENTIALS environment variable for that purpose (and all Google API clients automatically pick up this variable). So it looks like Traefik may have made a poor decision here by calling it something else.
I recommend you look at the Pod spec of the Traefik pod (the volumes and volumeMounts fields) to see if the Secret is mounted into the pod correctly.
If you follow this tutorial, https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform, you can learn how to mount IAM service accounts into any Pod. Maybe you can combine this with the Helm chart itself and figure out what you need to do to make this work.
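If the chart does not wire the Secret in for you, it could be exposed to the Traefik container along these lines (a sketch against a hypothetical Deployment spec; the chart's actual field names may differ):

```
spec:
  containers:
  - name: traefik
    env:
    - name: GCE_PROJECT
      valueFrom:
        secretKeyRef:
          name: traefik-credentials
          key: GCE_PROJECT
    - name: GCE_SERVICE_ACCOUNT_FILE
      value: /secrets/key.json          # path where the key file is mounted below
    volumeMounts:
    - name: gce-credentials
      mountPath: /secrets
      readOnly: true
  volumes:
  - name: gce-credentials
    secret:
      secretName: traefik-credentials
      items:
      - key: GCE_SERVICE_ACCOUNT_FILE
        path: key.json
```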
I'm setting up Spinnaker in K8s with aws-ecr. My setup and steps are:
On the AWS side:
Added policies ecr-pull, ecr-push, and ecr-generate-token
Attached the policy to a role
Spinnaker setup:
Modified values.yaml with the settings below:
```
accounts:
- name: my-ecr
  address: https://123456xxx.dkr.ecr.my-region.amazonaws.com
  repositories:
  - 123456xxx.dkr.ecr..amazonaws.com/spinnaker-test-project
```
Annotated the clouddriver deployment to use the created role (using the IAM role in a pod by referencing the role name in an annotation on the pod specification)
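For reference, the annotation step looked roughly like this (a sketch assuming a kube2iam-style setup; the role name is a placeholder):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spin-clouddriver
spec:
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: spinnaker-ecr-role   # placeholder IAM role name
```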
But it doesn't work, and the error on the clouddriver side is:
.d.r.p.a.DockerRegistryImageCachingAgent : Could not load tags for 1234xxxxx.dkr.ecr.<my_region>.amazonaws.com/spinnaker-test-project in https://1234xxxxx.dkr.ecr.<my_region>.amazonaws.com
I would like some help or advice on what I'm missing, thank you.
Got the answer from the official Spinnaker Slack channel: adding an IAM policy to the clouddriver pod unfortunately won't work, since it uses the Docker client instead of the AWS client. The workaround to make it work can be found here.
Note: ECR support is currently broken in Halyard. This might get fixed after Halyard migrates from Kubernetes v1 to v2, or earlier, so please verify with the community or the docs.
I followed the official Kubernetes installation guide to install Kubernetes on Fedora 22 servers. Everything worked out for me during the installation.
After the installation, I could see all my nodes up, running, and connected to the master. However, it kept failing when I tried to create a simple pod, following the 101 guide.
$ kubectl create -f pod-nginx.yaml
Error from server: error when creating "pod-nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Do I need to create an API token? If yes, how?
I googled the issue but found no helpful results. It looks like I am the only one on this planet to hit this issue.
Does anyone have ideas on this?
The ServiceAccount admission controller prevents pods from being created until their service account in their namespace is initialized.
If the controller-manager is started with the appropriate arguments, it will automatically populate namespaces with a default service account, and auto-create the API token for that service account.
It looks like that guide needs to be updated with the information from this comment:
https://github.com/GoogleCloudPlatform/kubernetes/issues/11355#issuecomment-127378691
```
openssl genrsa -out /tmp/serviceaccount.key 2048

# edit /etc/kubernetes/apiserver and set:
KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"

# edit /etc/kubernetes/controller-manager and set:
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/tmp/serviceaccount.key"

systemctl restart kube-controller-manager.service
```
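Before restarting the services, it can be worth sanity-checking the generated key, since a malformed key file will keep the apiserver and controller-manager from signing tokens (a sketch; the path matches the one used above):

```shell
# Generate the signing key shared by the apiserver and controller-manager
openssl genrsa -out /tmp/serviceaccount.key 2048
# Verify it parses as a valid RSA private key; prints "RSA key ok" on success
openssl rsa -in /tmp/serviceaccount.key -check -noout
```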