Is it normal to run "certbot renew" every 12 hours? - docker-compose

I have read the post about using Docker with Certbot and I have a question: is it normal to run "certbot renew" every 12 hours?
I saw it in the post, in the command that checks whether the certificate has expired.
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
The article says of this command: "This will check if your certificate is up for renewal every 12 hours as recommended by Let’s Encrypt."
... but I can't understand - does it create a new certificate every 12 hours, or does it check the expiry date first?
Thanks for your attention.

certbot renew will not necessarily renew any certificate. It will check certificate expiry dates, and if they are due to expire within 30 days it will actually renew them, otherwise it will do nothing. So it's safe to call it every 12 hours.
https://eff-certbot.readthedocs.io/en/stable/using.html#renewing-certificates
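To see what the loop actually does, the entrypoint one-liner from the question expands to roughly the following script (a sketch; the $${!} in the compose file is just ${!} escaped for docker-compose variable interpolation):

#!/bin/sh
# Exit cleanly when Docker sends SIGTERM to the container
trap exit TERM
while :; do
  # Renew only certificates within 30 days of expiry; otherwise this is a no-op
  certbot renew
  # Sleep 12 hours in the background and wait on it, so the TERM
  # trap can interrupt the wait immediately
  sleep 12h & wait ${!}
done

And if you want to confirm that nothing is reissued ahead of time, certbot can report its own state without touching anything:

# List managed certificates and their expiry dates
certbot certificates
# Simulate a renewal run against the staging environment
certbot renew --dry-run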

Related

How to set timeout for cURL CRL checking?

curl --connect-timeout 5 --doh-url $dohUrl --max-time 10 --tlsv1.3 ....
I've tried using --connect-timeout, --max-time, or both at the same time, as you can see above, but cURL still wastes a lot of time on the CRL check. I want to tell it to stop if the check takes longer than 5 seconds. Currently, cURL keeps trying the CRL for 20 seconds and then throws this error:
curl: (35) schannel: next InitializeSecurityContext failed: Unknown error (0x80092013) - The revocation function was unable to check revocation because the revocation server was offline.
This is an intentional scenario that I want cURL to handle. I do not want to set --ssl-no-revoke because that skips the CRL check entirely; I just don't want cURL to keep trying the CRL for more than 5 seconds, and I want it to throw that error after 5 seconds instead of 20+ seconds.
-m, --max-time
Maximum time in seconds that you allow the whole operation to take. This is useful for preventing your batch jobs from hanging for hours due to slow networks or links going down. Since 7.32.0, this option accepts decimal values, but the actual timeout will decrease in accuracy as the specified timeout increases in decimal precision.
I'm quoting that from here. Why is cURL not respecting that parameter? I set it to 10 seconds, but it takes more than 20 seconds, stuck at the CRL checking phase. Is the problem where in the command I placed that parameter?
I don't want to do anything extra and don't want to check the certificate or CRL myself with other methods.
You can easily test it: just set incorrect DoH details in the Windows settings so that DNS resolution won't work, but you will still be able to access web resources using their IP addresses.
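One way to cap the wall-clock time regardless of which phase cURL gets stuck in is to enforce the limit from outside the process. This is only a sketch, assuming a Unix-like shell with coreutils' timeout available (e.g. WSL or Git Bash); the URL and DoH server are placeholders:

#!/bin/sh
dohUrl="https://doh.example/dns-query"   # placeholder DoH resolver
url="https://site.example/"              # placeholder target

# Kill curl after 5 seconds of wall-clock time, no matter whether it is
# stuck in DNS, the TLS handshake, or the revocation check
timeout 5 curl --connect-timeout 5 --max-time 10 --tlsv1.3 \
  --doh-url "$dohUrl" "$url"

# coreutils timeout exits with 124 when it had to kill the command
if [ $? -eq 124 ]; then
  echo "curl exceeded the 5 second budget" >&2
fi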

cert-manager automatic renewal isn't checking certificates with a due renewal time

I've been using cert-manager for 87 days now and saw that some certs which are due to expire in 3 days (with a duration of 90 days) are not being renewed automatically. At first, the certificates weren't tagged with a duration or renewBefore spec. This should default to 90 days / 30 days according to the documentation.
I've already tried using cmctl to force a renewal, added the duration and renewBefore specs, and spent a lot of time looking at the logs of all the cert-manager pods. Since starting my debugging journey, I saw that the cert indeed got a renewalTime added to it. But the certificates are not being renewed at all. Could it be possible that cert-manager isn't checking for certificates to renew?
I've got about 20 which are due this week, and ~100 this month, so I would really like this auto-renewal to work.
If any configuration is needed, I'll gladly provide additional info.
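A few commands that usually help narrow this down (a sketch; the certificate name and namespace are placeholders, it assumes kubectl and cmctl are available, and the status field names are those used by recent cert-manager versions):

# Show the renewal and expiry times cert-manager has recorded for each certificate
kubectl get certificate -A -o custom-columns=NAME:.metadata.name,RENEWAL:.status.renewalTime,NOTAFTER:.status.notAfter

# Inspect a single certificate in detail, including recent events
kubectl describe certificate my-cert -n my-namespace

# Ask cert-manager to renew a specific certificate immediately
cmctl renew my-cert -n my-namespace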

x509: certificate signed by unknown authority CMD K6.io

I am presently using a nice tool to perform performance (load) testing for a project over a VPN. I am using K6.io and SOAPUI, and I run the same test with both tools just to compare results. K6.io is a JavaScript-based tool that gathers robust, configurable metrics, and likewise SOAPUI. The challenge is that SOAPUI works smoothly while K6.io always hits a certificate issue. Since SOAPUI works fine, I believe the problem is either a certificate issue with my system or with K6.io; I'm yet to figure out what the real problem is.
The main reason I like k6.io is its integration with robust reporting tools like InfluxDB and Grafana, because I have to generate graphical reports for the test.
Error from k6.io CMD
script: AccountClosure.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
WARN[0002] Request Failed error="Post \"https://mysite.behindvpn.com:4333/fiwebservice/services/FIPWebService\": x509: certificate signed by unknown authority"
running (00m01.8s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs 00m01.4s/10m0s 1/1 iters, 1 per VU
If you don’t want to run with --insecure-skip-tls-verify, I think your only option is to add the root CA certificate to your local store.
On Linux this would involve the ca-certificates package and copying your cert to the correct location. This will be system dependent, but see the instructions for Ubuntu, otherwise consult your OS documentation.
Ivan
https://community.k6.io/t/x509-certificate-signed-by-unknown-authority/1057/2?u=ken4ward
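For the Ubuntu route Ivan describes, the procedure is roughly the following (a sketch; my-root-ca.crt is a placeholder and must be a PEM-encoded certificate with a .crt extension):

# Copy the root CA certificate into the local trust store
sudo cp my-root-ca.crt /usr/local/share/ca-certificates/my-root-ca.crt

# Rebuild the system trust store so tools such as k6 pick it up
sudo update-ca-certificates

# Or, just to confirm the CA is the only problem, skip verification once
k6 run --insecure-skip-tls-verify AccountClosure.js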

How to update SSL certificate on IBM Cloud without downtime?

We use bluemix-letsencrypt for generating SSL certificates (as mentioned, for example, here).
When you run the script, a limitation is mentioned at the end of the process: you're not able to update an existing certificate without downtime. You need to delete the old certificate first and then upload a new one, but this procedure means unacceptable downtime.
The suggested solution is to use the IBM Cloud console, where it should be possible to upload new SSL certs over the old ones, i.e. without downtime. This solution worked until recently (2-3 months ago), but not anymore.
A few days ago I wanted to do the same as I did four times over the last 12 months (every 3 months), but the design of the console has been changed and now it's impossible to do that.
This is really bad. Since we use HTTP Strict Transport Security, any SSL certificate downtime is critical for us.
Does anyone know how we could solve this issue?
Thank you.

Let's Encrypt certificate expiration date issue

I am using Let’s Encrypt on my production server to handle the SSL certificate.
My website's certificate will expire next week, so I regenerated it using the letsencrypt-auto renew command (I haven't set up a cron task yet).
The last log line I get is 2016-08-20 17:12:20,305:DEBUG:certbot.renewal:no renewal failures, which means the certificate was successfully regenerated.
But when I go back to my website and check the certificate properties it still says that it will expire next week.
So:
Does Let’s Encrypt wait until the certificate's last day to update the new expiration date in the browser?
Is my new certificate not working properly, which would explain why the browser still shows next week as the expiration date?
Can someone help me clarify how certificate expiration dates work?
Thanks for your help !
Thanks to the Let's Encrypt community, I was able to figure out what was wrong: I just needed to reload my Nginx server, and that updated the certificate's expiration time!
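If you hit the same symptom, a quick way to confirm that the running web server is still serving the old certificate (rather than the renewed file on disk) is to compare the two directly (a sketch; example.com and the live path are placeholders for your own domain):

# Expiry of the certificate the live server is actually serving
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -enddate

# Expiry of the renewed certificate file on disk
sudo openssl x509 -noout -enddate -in /etc/letsencrypt/live/example.com/fullchain.pem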
I'll just follow up here with a bit more information, for those who are looking at this question for answers.
If you have the renewal running in crontab and you run into this issue, you can pass the command option --post-hook 'some command', where 'some command' is the shell command needed to reload your web server.
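A crontab entry along these lines does that (a sketch assuming Nginx is managed by systemd; adjust the reload command to your web server):

# Run twice a day; the post-hook only fires when a renewal is attempted,
# so Nginx is not reloaded on every cron run
0 3,15 * * * certbot renew --quiet --post-hook "systemctl reload nginx"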
Though this comes late, it might be useful to someone.
Even after restarting Apache I still had the issue. A full machine reboot solved it for me. This will only be useful if you have full control of the server machine, though.