kubeadm init --token=xyz or kubeadm init --token xyz? - kubernetes

Question
Which format of kubeadm init --token is correct?
The "(2/4) Initializing your master" page shows "--token xyz":
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
The kubeadm init reference shows "--token=xyz":
kubeadm join --token=abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 192.168.1.1:6443
The execution log (using Ansible) showed several error messages. I wonder if this is related to the format.
changed: [192.168.99.12] => {...
"[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.",
"[preflight] Running pre-flight checks",
"[preflight] Starting the kubelet service",
"[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
"[discovery] Failed to connect to API Server \"192.168.99.10:6443\": there is no JWS signed token in the cluster-info ConfigMap. This token id \"7ae0ed\" is invalid for this cluster, can't connect",
"[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
"[discovery] Failed to connect to API Server \"192.168.99.10:6443\": there is no JWS signed token in the cluster-info ConfigMap. This token id \"7ae0ed\" is invalid for this cluster, can't connect",
"[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
"[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
"[discovery] Requesting info from \"https://192.168.99.10:6443\" again to validate TLS against the pinned public key",
"[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"192.168.99.10:6443\"",
"[discovery] Successfully established connection with API Server \"192.168.99.10:6443\"",
"[bootstrap] Detected server version: v1.8.5",
"[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)",
"",
"Node join complete:",
"* Certificate signing request sent to master and response",
" received.",
"* Kubelet informed of new secure connection details.",
"",
"Run 'kubectl get nodes' on the master to see this machine join."

kubeadm uses spf13/pflag, so both notations are valid.
From the docs:
--flag // boolean flags, or flags with no option default values
--flag x // only on flags without a default value
--flag=x

As far as I know, the format does not matter. The recommended form is the one kubeadm prints in its own output:
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
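In other words, both of the following are equivalent (reusing the example token and hash from the question):
kubeadm join --token abcdef.1234567890abcdef 192.168.1.1:6443 --discovery-token-ca-cert-hash sha256:1234..cdef
kubeadm join --token=abcdef.1234567890abcdef 192.168.1.1:6443 --discovery-token-ca-cert-hash sha256:1234..cdef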
I think the token is used for security purposes, so that the master node and the worker node can establish trust and communicate over an encrypted connection; it is used during the TLS bootstrap.
The kubelet on the worker node needs its own token to communicate with the kube-apiserver on the master node.
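If a join fails with a "no JWS signed token" error like the one in the log above, it is worth checking on the master that the token actually exists and has not expired. A rough sketch (kubeadm v1.8+; --print-join-command only exists on newer releases):
kubeadm token list                            # show existing bootstrap tokens and their TTL
kubeadm token create                          # create a fresh token if the old one has expired
kubeadm token create --print-join-command     # newer kubeadm: prints a ready-made join command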

Related

Getting Unable to connect to the server: x509: certificate is valid for ingress.local, not rancher

As part of renewing our cluster certificate, we accidentally deleted the tls-rancher-ingress secret from the local cluster. Since then we are unable to access the cluster through kubectl and get an error like "Unable to connect to the server: x509: certificate is valid for ingress.local, not rancher". Please guide us if there is any way to add the secret again without using kubectl.

Self-signed certificates ok for kubernetes validating webhooks?

I'm trying to understand the security implications for using self-signed certificates for a Kubernetes validating webhook.
If I'm understanding correctly, the certificate is simply used to be able to serve the validating webhook server over https. When the Kubernetes api-server receives a request that matches the configuration for a validating webhook, it'll first check with the validating webhook server over https. If your validating webhook server lives on the Kubernetes cluster (is not external) then this traffic is all internal to a Kubernetes cluster. If this is the case is it problematic that the cert is self-signed?
If I'm understanding correctly, the certificate is simply used to be
able to serve the validating webhook server over https.
Basically yes.
If your validating webhook server lives on the Kubernetes cluster (is
not external) then this traffic is all internal to a Kubernetes
cluster. If this is the case is it problematic that the cert is
self-signed?
If the issuing process is handled properly and in a secure manner, self-signed certs shouldn't be a problem at all. Compare with this example.
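What matters is that the CA bundle registered in the ValidatingWebhookConfiguration (clientConfig.caBundle) contains the certificate that signed the webhook's serving cert, so the api-server can verify it. A minimal openssl sketch, assuming a hypothetical webhook service my-webhook in namespace my-namespace:
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=webhook-ca" -days 365 -out ca.crt
openssl genrsa -out tls.key 2048
openssl req -new -key tls.key -subj "/CN=my-webhook.my-namespace.svc" -out tls.csr
printf "subjectAltName=DNS:my-webhook.my-namespace.svc" > san.cnf
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -extfile san.cnf -out tls.crt
base64 -w0 ca.crt    # paste this value into clientConfig.caBundle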

Encrypt & Decrypt data between Kubernetes API Server and Client

I have two Kubernetes clusters set up with kubeadm and I'm using HAProxy to redirect and load balance traffic to the different clusters. Now I want to redirect the requests to the respective API server of each cluster.
Therefore, I need to decrypt the SSL requests, read the "Host" HTTP header and encrypt the traffic again. My example HAProxy config file looks like this:
frontend k8s-api-server
    bind *:6443 ssl crt /root/k8s/ssl/apiserver.pem
    mode http
    default_backend k8s-prod-master-api-server

backend k8s-prod-master-api-server
    mode http
    option forwardfor
    server master 10.0.0.2:6443 ssl ca-file /root/k8s/ssl/ca.crt
If I now access the api server via kubectl, I get the following errors:
kubectl get pods
error: the server doesn't have a resource type "pods"
kubectl get nodes
error: the server doesn't have a resource type "nodes"
I think I'm using the wrong certificates for decryption and encryption.
Do I need to use the apiserver.crt, apiserver.key and ca.crt files in the directory /etc/kubernetes/pki?
Your setup probably entails authenticating with your Kubernetes API server via client certificates; when your HAProxy reinitiates the connection it is not doing so with the client key and certificate on your local machine, and it's likely making an unauthenticated request. As such, it probably doesn't have permission to know about the pod and node resources.
An alternative is to proxy at L4 by reading the SNI header and forwarding traffic that way. This way, you don't need to read any HTTP headers, and thus you don't need to decrypt and re-encrypt the traffic. This is possible to do with HAProxy.
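A rough sketch of such an SNI-based TCP passthrough, following the config style above (the hostname prod.k8s.example.com is made up; match on whatever name your kubeconfig points at):
frontend k8s-api-sni
    bind *:6443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend k8s-prod-master-api-server if { req_ssl_sni -i prod.k8s.example.com }
    default_backend k8s-prod-master-api-server

backend k8s-prod-master-api-server
    mode tcp
    server master 10.0.0.2:6443
Because the TLS session now terminates at the API server itself, kubectl's client certificate authentication keeps working unchanged.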

Kubelet certificate rotation - worker nodes

I have been running a K8s cluster (v1.13.5) for a year and the control plane certs and kubelet certs are about to expire. I found a way to rotate all the control plane certs and I wanted to know how to rotate the kubelet certs. Can someone help me understand how to rotate the certs for the worker nodes and the master (if needed)? This K8s cluster is deployed using Kubespray.
Since Kubernetes version 1.8.0, a beta feature is available: certificate rotation.
The kubelet uses certificates for authenticating to the Kubernetes API. By default, these certificates are issued with one year expiration so that they do not need to be renewed too frequently.
Kubernetes 1.8 contains kubelet certificate rotation, a beta feature that will automatically generate a new key and request a new certificate from the Kubernetes API as the current certificate approaches expiration. Once the new certificate is available, it will be used for authenticating connections to the Kubernetes API.
Because this is a beta feature, it needs to be enabled with a feature gate, so you need to add the following to the kubelet's arguments:
--feature-gates=RotateKubeletClientCertificate=true
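Where exactly this flag goes depends on how the kubelet is launched on your nodes; with a systemd-managed kubelet it is usually appended to the kubelet's extra arguments in an environment file (the exact path is an assumption here, as Kubespray and kubeadm use different locations, so check your node):
# e.g. in the kubelet environment file on each node
# (--rotate-certificates additionally tells the kubelet to rotate its client cert)
KUBELET_EXTRA_ARGS="--feature-gates=RotateKubeletClientCertificate=true --rotate-certificates"
# then reload systemd and restart the kubelet
systemctl daemon-reload && systemctl restart kubelet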
When a kubelet starts up, if it is configured to bootstrap (using the --bootstrap-kubeconfig flag), it will use its initial certificate to connect to the Kubernetes API and issue a certificate signing request. You can view the status of certificate signing requests using:
kubectl get csr
Initially a certificate signing request from the kubelet on a node will have a status of Pending. If the certificate signing request meets specific criteria, it will be auto-approved by the controller manager, and then it will have a status of Approved. Next, the controller manager will sign a certificate, issued for the duration specified by the --experimental-cluster-signing-duration parameter, and the signed certificate will be attached to the certificate signing request.
The kubelet will retrieve the signed certificate from the Kubernetes API and write that to disk, in the location specified by --cert-dir. Then the kubelet will use the new certificate to connect to the Kubernetes API.
As the expiration of the signed certificate approaches, the kubelet will automatically issue a new certificate signing request, using the Kubernetes API. Again, the controller manager will automatically approve the certificate request and attach a signed certificate to the certificate signing request. The kubelet will retrieve the new signed certificate from the Kubernetes API and write that to disk. Then it will update the connections it has to the Kubernetes API to reconnect using the new certificate.
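To watch this happening and to confirm when the new certificate takes effect, something along these lines works (the pem path below assumes the default --cert-dir of /var/lib/kubelet/pki; manual approval is only needed if auto-approval is not configured):
kubectl get csr                                    # list kubelet CSRs and their status
kubectl certificate approve <csr-name>             # approve a Pending request by hand if necessary
openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem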

How can I overcome the x509 signed by unknown certificate authority error when using the default Kubernetes API Server virtual IP?

I have a Kubernetes cluster running in High Availability mode with 3 master nodes. When I try to run the DNS cluster add-on as-is, the kube2sky application errors with an x509 signed by unknown certificate authority message for the API Server service address (which in my case is 10.100.0.1). Reading through some of the GitHub issues, it looked like Tim Hockin had fixed this type of issue via using the default service account tokens available.
All 3 of my master nodes generate their own certificates for the secured API port, so is there something special I need to do configuration-wise on the API servers to get the CA certificate included in the default service account token?
It would be ideal to have the service IP of the API in the SAN field of all your server certificates.
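You can check whether the service IP is already present in a server certificate's SANs with something like this (adjust the path to wherever your API server certificate lives):
openssl x509 -in apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"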
If this is not possible in your setup, set the clusters{}.cluster.insecure-skip-tls-verify field to true in your kubeconfig file, or pass the --insecure-skip-tls-verify flag to kubectl.
If you are trying to reach the API from within a pod you could use the secrets mounted via the service account. By default, if you use the default service account, the CA certificate and a signed token are mounted at /var/run/secrets/kubernetes.io/serviceaccount/ in every pod, and any client can use them from within the pod to communicate with the API. This would help you solve the unknown certificate authority error and give you an easy way to authenticate against your API servers at the same time.
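For example, from inside a pod (the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables are injected into every pod by the kubelet):
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert $SA/ca.crt \
     -H "Authorization: Bearer $(cat $SA/token)" \
     https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api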