Setup local kubectl with Rancher

I copied the Rancher config file to my local kubeconfig, and when I tried to connect, I got an error:
Unable to connect to the server: x509: certificate signed by unknown authority
I'm not an admin of this cluster and can't really change its settings. So I googled and found that I can add
insecure-skip-tls-verify: true
I also removed the certificates, leaving only the username and token, and then it started to work.
Can you explain whether it is safe to use it like this, and why we need certs at all if it works without them?

You may treat it as an additional layer of security. If you allow someone (in this case yourself) to connect to the cluster and manage it without needing a proper certificate, keep in mind that you allow it for everyone else too.
insecure-skip-tls-verify: true
is pretty self-explanatory: yes, it's insecure, as it skips TLS verification, and it is not recommended in production. As you can read in the documentation:
--insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
A username and token provide some level of security, as they are still required to connect, but they have nothing to do with establishing a secure, trusted connection. By default that can only be done by clients that also have a proper certificate.
If you don't want to skip TLS verification, you may want to try this solution. For Kubernetes >= 1.15 only, use the command kubeadm alpha certs renew all.
You can read more about managing TLS certificates in a Kubernetes cluster here.
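As a sketch, the two approaches look like this in kubectl terms (the cluster name and CA file path are placeholders, not from the original question):

# Option A (insecure workaround): skip server certificate verification.
kubectl config set-cluster my-rancher-cluster --insecure-skip-tls-verify=true

# Option B (preferred): keep TLS verification and trust the cluster CA instead.
# ca.crt is the cluster's CA certificate, e.g. obtained from your cluster admin.
kubectl config set-cluster my-rancher-cluster \
  --certificate-authority=./ca.crt \
  --embed-certs=true

# Check that the connection now verifies.
kubectl --cluster=my-rancher-cluster get nodes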

Related

Rancher TLS Certificate Authority

Quick question: in Rancher, is it possible to use Let's Encrypt to sign the k8s TLS certs (etcd, kube-api, etc.)? I have a compliance requirement to sign my k8s environment with a valid, trusted CA chain.
Yes, it is actually one of the recommended options for the source of the certificate used for TLS termination at the Rancher server:
Let’s Encrypt: The Let’s Encrypt option also uses cert-manager. However, in this case, cert-manager is combined with a special Issuer for Let’s Encrypt that performs all actions (including request and validation) necessary for getting a Let’s Encrypt issued cert.
In the links below you will find a walkthrough showing how to:
Install cert-manager
Install Rancher with Helm and Your Chosen Certificate Option
This option uses cert-manager to automatically request and renew Let’s Encrypt certificates. This is a free service that provides you with a valid certificate, as Let’s Encrypt is a trusted CA.
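For instance, a minimal install sketch based on that walkthrough (the hostname and email are placeholders, and cert-manager must already be installed):

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
kubectl create namespace cattle-system

# Install Rancher with TLS terminated by a Let's Encrypt certificate.
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set ingress.tls.source=letsEncrypt \
  --set letsEncrypt.email=admin@example.com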
Please let me know if that helped.

Certificate replacement

Is there a specific method or process to replace all of the certificates required in a Kubernetes 1.7 cluster? Is this even possible?
The client is worried about using certificate auth and not being able to revoke/blacklist certs properly if someone leaves.

Fabric access with client certificate auth fails

We're using a Service Fabric secure cluster and need a client certificate for our CI/CD tools.
I've created both the cluster primary certificate and the client certificate with this script: https://gist.github.com/kagarlickij/d63a4061a1066d3a85abcc658f0856f5
Both have been uploaded to the same Key Vault, and both have been installed to the local keystore on my machine.
I've added the client certificate to my Fabric security settings (Authentication type = Admin client, Authorization method = Certificate thumbprint).
The problem is that I can connect to the Fabric cluster (using Connect-ServiceFabricCluster in PowerShell) with the cluster primary certificate, but I can't with the client certificate.
I'm getting this error: Connect-ServiceFabricCluster : FABRIC_E_SERVER_AUTHENTICATION_FAILED: 0x800b0109
Please advise what can be done.
Based on this link, the corresponding error message for 0x800b0109 is:
A certificate chain processed, but terminated in a root certificate which is not trusted by the trust provider.
You're using a self-signed certificate as the client cert. I'm not sure that's supported, as explained in the Service Fabric security documentation; moreover, you'll have to make sure the SSL certificate has been added to your local store.
Client X.509 certificates
Client certificates typically are not issued by a third-party CA. Instead, the Personal store of the current user location typically contains client certificates placed there by a root authority, with an Intended Purposes value of Client Authentication. The client can use this certificate when mutual authentication is required.
Note: All management operations on a Service Fabric cluster require server certificates. Client certificates cannot be used for management.
I had the same issue managing my cluster through PowerShell. I only had one cert on the cluster (the one Azure generates when creating the cluster), and I believe it is a client cert, since I have to select it in my browser when managing the cluster.
Ultimately I had to add the self-signed cert to my Root certificate store (in addition to my Personal store, where I already had it) to get the PowerShell module to stop complaining about it.
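For reference, a connection sketch in PowerShell (the endpoint and thumbprints are placeholders); the client certificate is looked up by thumbprint in the CurrentUser\My store, and a self-signed one must additionally be trusted as described above:

# Connect using the client certificate; 19000 is the default client endpoint port.
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000" `
    -X509Credential `
    -ServerCertThumbprint "<server certificate thumbprint>" `
    -FindType FindByThumbprint `
    -FindValue "<client certificate thumbprint>" `
    -StoreLocation CurrentUser `
    -StoreName My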

Using self-signed X509 certs to secure a production SF Cluster

I'm going down the path of figuring out the details of securing our SF clusters. I'm finding that the docs note in a number of places not to use self-signed certs for production workloads, but nowhere do they explain why.
Can anyone from the SF team explain why a self-signed X509 cert is not as secure as one issued by a known CA? I thought the only true difference is that self-signed certs do not chain to a certified root authority, which would mean clients might not see the cert as valid. But with node-to-node security, why would this matter?
So what risk am I taking if I use self-signed certs for node-to-node or even client-to-node security of my production SF clusters?
For client to node: as anyone can spoof your self-signed certificate, you won't be able to assert from the client that you're actually talking to the correct server. Also, there's no way to revoke a self-signed cert. Finally, end users will see that nasty security warning in the address bar.
For node to node: the same thing applies, but since it's in a VNet behind the load balancer, the risk of tampering is lower.
Encryption of the data itself will work using either type of certificate, but a MITM attack is made easier.
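To see the difference concretely, you can inspect a certificate with openssl (the file names here are placeholders):

# For a self-signed cert, subject and issuer are identical.
openssl x509 -in node-cert.pem -noout -subject -issuer

# Chain verification fails for a self-signed cert unless it has been
# explicitly added to the trust store.
openssl verify -CAfile ca-bundle.pem node-cert.pem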

How can I overcome the x509 signed by unknown certificate authority error when using the default Kubernetes API Server virtual IP?

I have a Kubernetes cluster running in High Availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application errors with an x509 signed by unknown certificate authority message for the API Server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue by using the default service account tokens available.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?
It would be ideal to have the service IP of the API in the SAN field of all your server certificates.
If this is not possible in your setup, set the clusters{}.cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
If you are trying to reach the API from within a pod, you could use the secrets mounted via the service account. By default, if you use the default secret, the CA certificate and a signed token are mounted at /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This would help you solve the unknown certificate authority error and provide you with an easy way to authenticate against your API servers at the same time.
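As a sketch, from inside a pod this looks as follows (10.100.0.1 is the API service IP from the question; this assumes that IP is in the server certificate's SANs, as recommended above):

SA=/var/run/secrets/kubernetes.io/serviceaccount

# Verify TLS against the mounted cluster CA and authenticate with the token.
curl --cacert "$SA/ca.crt" \
     -H "Authorization: Bearer $(cat "$SA/token")" \
     https://10.100.0.1/api/v1/namespaces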