I am trying to access Argo CD at https://127.0.0.1:8080/ and I get the following error message:
I forwarded the port as suggested on the https://argoproj.github.io/argo-cd/getting_started/ website:
kubectl port-forward svc/argocd-server -n argocd 8080:443
I also installed the Argo CD certificate, as you can see at the bottom:
When I try to access it via Firefox, it works:
Why does it not work with Chromium?
The certificate appears invalid (self-signed) in either browser. I believe that by default Chrome blocks self-signed certificates on localhost "for users' protection": https://support.google.com/chrome/thread/3321715?hl=en. Enabling the setting chrome://flags/#allow-insecure-localhost should allow you to access the site in Chrome.
Firefox will show a warning, but does not block users from using an insecure site by default.
By default Argo (and most things) will create a self-signed HTTPS certificate. This makes setup easier, but since it's not signed by a trusted source, you get this error. You can either give Argo a real cert directly, or use something like the Ingress system to terminate TLS (or both). Check out cert-manager for issuing LetsEncrypt certs in Kubernetes.
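For example, here is a minimal sketch of terminating TLS at an nginx Ingress in front of argocd-server, with cert-manager issuing the certificate; the hostname argocd.example.com, the ClusterIssuer name letsencrypt-prod, and the secret name are placeholders you would adapt to your setup:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    # Ask cert-manager to issue a certificate via the named (placeholder) issuer
    cert-manager.io/cluster-issuer: letsencrypt-prod
    # argocd-server itself serves HTTPS, so talk to the backend over HTTPS
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: argocd.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 443
  tls:
  - hosts:
    - argocd.example.com
    secretName: argocd-server-tls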
I'm facing an issue in a Kubernetes deployment with the error below, related to the HTTPS certificate:
Error: Host name does not match the certificate subject provided by the peer (CN=customer.endpoint.com)
My application is reached via a network IP address and port number, and the network IP is dynamic for the pods. How do I alias customer.endpoint.com to avoid the above issue?
To access your application, you first have to create a Service for it. Read more here: kubernetes-services.
Then you have to create a TLS certificate for a Kubernetes Service accessed through DNS.
Please take a look at tls-certificates. In this documentation you will find how to properly set up the certificates.
The flow will be like this (a minimal sketch of steps 3–6 follows the list):
1. Create a Service to expose your app, for example of type ClusterIP.
Remember that choosing this value makes the Service only reachable from within the cluster; this is the default ServiceType.
2. Download and install CFSSL - source: pkg-cfssl.
3. Create a Certificate Signing Request
4. Create a Certificate Signing Request object to send to the Kubernetes API
5. Get the Certificate Signing Request Approved
6. Download the Certificate and use it
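Here is a minimal sketch of steps 3–6 based on the linked documentation, assuming a Service named my-svc in namespace my-namespace with cluster IP 10.0.34.2 (all placeholders). Note that on newer clusters the CertificateSigningRequest API is certificates.k8s.io/v1 and additionally requires a signerName:
# Step 3: generate a key and CSR with CFSSL (hosts/CN are placeholders)
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "10.0.34.2"
  ],
  "CN": "my-svc.my-namespace.svc.cluster.local",
  "key": { "algo": "ecdsa", "size": 256 }
}
EOF

# Step 4: wrap the CSR in a Kubernetes CertificateSigningRequest object
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(base64 < server.csr | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF

# Step 5: approve it (requires permission to approve CSRs)
kubectl certificate approve my-svc.my-namespace

# Step 6: download the issued certificate and use it
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' | base64 --decode > server.crt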
I copied the Rancher config file to my local kubeconfig, and when I tried to connect I got this error:
Unable to connect to the server: x509: certificate signed by unknown authority
I'm not an admin of this cluster and can't really change its settings. I googled and found that I can add
insecure-skip-tls-verify: true
I removed the certificates, leaving only the username and token, and it started to work.
Can you explain whether it is safe to use it like this, and why we need certificates at all if it works without them?
You may treat it as an additional layer of security. If you allow someone (in this case yourself) to connect to the cluster and manage it without needing a proper certificate, keep in mind that you allow it for everyone else as well.
insecure-skip-tls-verify: true
is pretty self-explanatory: yes, it's insecure, as it skips TLS verification, and it is not recommended in production. As you can read in the documentation:
--insecure-skip-tls-verify   If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure.
The username and token provide some level of security, since they are still required in order to connect, but they have nothing to do with establishing a secure, trusted connection. By default that can only be done by clients that also have a proper certificate.
If you don't want to skip TLS verification, you may want to try this solution: for Kubernetes >= 1.15 only, use the command kubeadm alpha certs renew all.
You can read more about managing TLS certificates in a Kubernetes cluster here.
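If you can obtain the cluster's CA certificate from the cluster admin (or from the Rancher UI), the safer alternative is to pin it in your kubeconfig instead of skipping verification. A minimal sketch, where all names, the server URL, and the credential values are placeholders:
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://my-rancher.example.com/k8s/clusters/c-xxxxx
    certificate-authority-data: <base64-encoded cluster CA certificate>
users:
- name: my-user
  user:
    token: <token copied from the Rancher config>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context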
I'm trying to configure nginx-ingress for Kubernetes so that the client certificate is passed on to the backend service without validation.
Due to customer requests we are using client certificates for authentication, but the certificates are not signed by the server, and as such it's not really our job to validate them, only to check that they are on the allowed-certificates list.
Things work well on our test server, where we use certificates signed by the server's CA certificate, but after setting
nginx.ingress.kubernetes.io/auth-tls-verify-client: "off"
the client certificate is no longer forwarded in the request headers, even though we still have
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
Does nginx-ingress even allow passing through client certificates without validation?
I expected that disabling auth-tls-verify-client would just stop nginx-ingress from validating the certificate signature and still pass it through, but instead it disappeared from the request.
May I suggest looking at this issue:
When adding auth-tls-pass-certificate-to-upstream: true to an ingress resource, the client certificate passed to the ingress controller is not forwarded to the backend pod.
Apparently, the issue was:
the issue to be more around standardization of passing client certificates in headers. I've found nginx is passing the client cert to the backend pod in the Ssl-client-certificate header.
In your case, I would suggest checking all the header values for the certificate, and see if it is there under a different name.
On top of this, remember to check that you start the nginx ingress controller with the --enable-ssl-passthrough flag.
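For reference, here is a minimal sketch of an Ingress combining the annotations discussed above, using ingress-nginx's optional_no_ca mode (which requests the client certificate without validating it against a CA). The host, namespace, secret, and service names are placeholders, and the exact annotation behaviour and upstream header name can vary between ingress-nginx versions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    # Request a client certificate but do not validate it against a CA
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "optional_no_ca"
    # Secret holding the CA bundle; still referenced even in optional modes
    nginx.ingress.kubernetes.io/auth-tls-secret: "my-namespace/my-ca-secret"
    # Forward the (URL-encoded PEM) client certificate to the backend in a header
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - app.example.com
    secretName: my-app-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 8443
The backend then only needs to compare the certificate it receives in the header against its allow list.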
Is there a reason why Facebook doesn't allow LetsEncrypt signed certificates in their "app development" section?
I keep getting this error:
(For the untrained eye, this is me trying to setup a webhook for new messages notifications)
I blurred out the host, but it's a valid host, and using Chrome or Firefox on Linux and Windows doesn't give any errors.
SSLLabs also says the site is perfectly valid.
Running curl https://... on my own host, sure enough, I get the same error:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
So my question is: why has Facebook (which openly supports LetsEncrypt) decided to use the default curl CA bundle to verify the callback URL of an app, if that doesn't allow LetsEncrypt?
It appears to be counterproductive to me.
Is there a way around this?
SSLLabs also says the site is perfectly valid.
It shows a warning in orange that the certificate chain is incomplete.
Your server should present all necessary intermediate certificates as well, in addition to the certificate issued for your domain (which was simply forgotten here by mistake).
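To illustrate, here is one way to check and fix this, with example.com as a placeholder host; the exact file names depend on how your certificate was issued (for Let's Encrypt via certbot they are typically fullchain.pem and privkey.pem):
# Inspect the chain the server actually sends; you should see the
# intermediate certificate(s) listed after your own certificate
openssl s_client -connect example.com:443 -servername example.com -showcerts </dev/null

# In nginx, serve the leaf certificate concatenated with the intermediates,
# e.g. the fullchain.pem produced by certbot, not cert.pem alone:
#   ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
#   ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;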
I have a Kubernetes cluster running in High Availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application errors with an x509 signed by unknown certificate authority message for the API Server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue via using the default service account tokens available.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?
It would be ideal to have the service IP of the API in the SAN field of all your server certificates.
If this is not possible in your setup, set the clusters{}.cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
If you are trying to reach the API from within a pod you could use the secrets mounted via the Service Account. By default, if you use the default secret, the CA certificate and a signed token are mounted to /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This would help you solving the unknown certificate authority error and provide you with an easy way to authenticate against your API servers at the same time.
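As a minimal sketch of that last option from inside a pod (using the 10.100.0.1 API service address from the question; the endpoint path is just an example):
# Read the Service Account token and use the mounted CA certificate
# to verify the API server's certificate
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     https://10.100.0.1/api/v1/namespaces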