Kubelet doesn't authenticate using TLS - Kubernetes

I have deployed a Kubernetes cluster on Ubuntu VMs using Docker.
Without TLS, it works fine (on port 8080).
I used Let's Encrypt to secure the API server (port 6443), and it works! My problem appears when my kubelet tries to authenticate to the master over HTTPS.
This is how I launch the API server:
/hyperkube apiserver \
--service-cluster-ip-range=10.0.0.1/24 \
--insecure-bind-address=127.0.0.1 \
--etcd-servers=http://127.0.0.1:4001 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
--client-ca-file=/srv/kubernetes/ca.crt \
--basic-auth-file=/srv/kubernetes/basic_auth.csv \
--min-request-timeout=300 \
--tls-cert-file=/srv/kubernetes/server.cert \
--tls-private-key-file=/srv/kubernetes/server.key \
--token-auth-file=/srv/kubernetes/known_tokens.csv \
--allow-privileged=true --v=4
And this is how I launch the kubelet:
/hyperkube kubelet \
--allow-privileged=true \
--api-servers=https://k8:6443 \
--kubeconfig=/srv/kubernetes/config.yaml \
--v=2 \
--address=0.0.0.0 \
--enable-server \
--containerized \
--cluster-dns=10.0.0.10 \
--cluster-domain=k8.local
Here is the config.yaml file:
apiVersion: v1
kind: Config
clusters:
- name: k8.local
  cluster:
    insecure-skip-tls-verify: true
    server: https://k8:6443
contexts:
- context:
    cluster: "k8.local"
    user: "node1"
  name: development
current-context: development
users:
- name: node1
  user:
    client-certificate: /var/run/kubernetes/kubelet.crt
    client-key: /var/run/kubernetes/kubelet.key
When I launch my kubelet, the logs say:
the server has asked for the client to provide credentials
I think something is wrong with the kubelet's certs, but I don't understand what.
Can you help me?
Thanks.

Were your client certs (/var/run/kubernetes/kubelet.crt) signed by the CA file identified in: --client-ca-file=/srv/kubernetes/ca.crt?
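A quick way to check that (a sketch, using the paths from your flags and kubeconfig) is with openssl:
# Should print "OK" if the kubelet cert chains up to the CA the API server trusts
openssl verify -CAfile /srv/kubernetes/ca.crt /var/run/kubernetes/kubelet.crt
# The cert's issuer should match the CA's subject
openssl x509 -in /var/run/kubernetes/kubelet.crt -noout -issuer
openssl x509 -in /srv/kubernetes/ca.crt -noout -subject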
Also, you might try replacing
- cluster:
    insecure-skip-tls-verify: true
    server: https://k8:6443
with:
- cluster:
    certificate-authority: /srv/kubernetes/ca.crt
    server: https://k8:6443
I've never used basic auth or token auth, but it's possible that having those flags in place requires password-based authentication. I'd try removing these as well if you're doing purely cert-based authentication:
--basic-auth-file=/srv/kubernetes/basic_auth.csv
--token-auth-file=/srv/kubernetes/known_tokens.csv

Related

Accessing a remote k3s cluster via Lens IDE

I was trying to configure a new installation of Lens IDE to work with my remote cluster (on a remote server, in a VM), but I ran into errors and can't find a proper explanation for this case.
Lens expects a config file, so I gave it the one from my cluster after changing
server: https://127.0.0.1:6443
to
server: https://(address of the remote server):(port forwarded to 6443 of the VM running the cluster)
After that, Lens shows this:
2021/06/14 22:55:13 http: proxy error: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.1.122, not (address to the remote server)
I can see that some cert has to be reconfigured, but I'm absolutely new to the thing.
Here are the full contents of the original config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0...
    client-key-data: LS0...
The solution is quite obvious and easy.
k3s has to add the new IP to the certificate. By default it includes only localhost and the IP of the node it's running on, so if you (like me) have some kind of machine in front of it (like an LB or a dedicated firewall), its IP has to be added manually.
There are two ways to do this:
During the installation of k3s:
curl -sfL https://get.k3s.io | sh -s - server --tls-san <desired IP>
Or the argument can be added to an already installed k3s by editing its service unit:
sudo nano /etc/systemd/system/k3s.service
ExecStart=/usr/local/bin/k3s \
    server \
    '--tls-san' \
    '<desired IP>' \
sudo systemctl daemon-reload
sudo systemctl restart k3s
P.S. I have faced issues with the second method, though.
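Either way, you can confirm the new SAN actually made it into the serving certificate by inspecting it from the client side (a sketch; replace <desired IP> with your LB/firewall address):
# List the Subject Alternative Names presented on port 6443
echo | openssl s_client -connect <desired IP>:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'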

Kubernetes Host IP For Vault Authentication with Kubernetes Engine

I was trying to authenticate Kubernetes with an external Vault using HashiCorp's tutorial, https://learn.hashicorp.com/vault/identity-access-management/vault-agent-k8s
In the configuration below, we have to provide the endpoint of our cluster as K8S_HOST:
vault write auth/kubernetes/config \
token_reviewer_jwt="$SA_JWT_TOKEN" \
kubernetes_host="https://$K8S_HOST:8443" \
kubernetes_ca_cert="$SA_CA_CRT"
I have set up the Kubernetes HA cluster in a private subnet with an ALB in front. I need help configuring the K8S_HOST endpoint.
So far I have generated SSL certs and recreated the dashboard.
Tried exposing kubernetes-dashboard as a NodePort.
Updated the certificate in the ALB, which is listening on 443.
But it is still not connecting to the cluster.
So my doubt is: is K8S_HOST:8443 the same endpoint as the Kubernetes dashboard, or something else?
What is the proper way to get the K8S_HOST details from a cluster in a private subnet?
Can someone please help with this? I am stuck here.
Use the kubectl config view command to view the cluster configuration:
$ kubectl config view --flatten --minify
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETXdOakE0TlRFd05sb1hEVE13TURNd05EQTROVEV3Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRlClg5eWZpN0JhVVlUNmhUcEUvNm02WW5HczlZSHY3SmFMOGxsWWsvOENUVjBRcUk4VjBOYnB5V3ByQjBadmV4ZmMKQ0NTQ2hkYWFlcVBQWUJDckxTSGVoVllZcE1PK2UrMVFkTFN2RmpnZUQ1UHY0NFBqTW1MeFAzVkk0MFVZOXVNNwpCcjRueVRPYnJpWXJaSVhTYjdTbWRTdFg5TUgreVVXclNrRllGSEhnREVRdXF0dFRJZ1pjdUh2MzY3Nkpyc1FuCmI1TlM0ZHJyc0U0NVZUcWYrSXR1NzRSa1VkOUsvcTNHMHN1SlVMZ3AxOUZ4ZXorYTNRenJiUTdyWTlGUEhsSG4KVno1N1dWYmt2cjMzOUxnNWd0VzB4am10Q1hJaGgzNFRqRE1OazNza0VvcFBibjJPcER5STVUMUtOL3Vsa0FmTAptcXJ4bU5VNEVVYy9NcWFoVlVrQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBL3c0OEFkdFU3Tkx2d0k1S2N4Y3hMMitBT0IKV29nakFKNUMwTlBTN1NDR2tWK2d6dlcrZHZVYWVtYTFHUFlJdUJuajBVR2k2QUF5SStES0tiY01iL2dVVUdKQQp0YVpEcFpidU1lZ1JZOVZ2dlpMZXRZQndESzkvWk9lYnF1MGh6eWo4TzduTnJaM3RIb3h6VW1MaVVIU2Jmc0R1CnkvaE9IM0wvUE1mZ0FFaHF5SVZwWGMvQzZCYWNlOEtRSWJMQ0hYZmZjTGhEWDQ0THZYSXVIL1Y3LzN1cHcxWm8KK05NcFY5Sys4TTExNHV2bWdyOHdTNkZHYlltdXFVZy9CTlpRd2FqKzVWMEZ6azZzeHoySTdZSXI3NHVNK3BLRgpMS3lEQzJnK2NXTU5YZTV0S0YrVG5zUXE1eWtNVEJKeHl1bTh5a3VtZTE4MGcyS1o3NzVTdVF1Ni9kND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://127.0.0.1:32769 # <<-----------here
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
  ... ... ...
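If you only need the server address, a jsonpath query pulls it straight out of the current context (a minimal sketch):
# Print just the API server endpoint of the current context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'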
Copy the server address and use it as kubernetes_host while configuring the Vault Kubernetes auth method.
$ vault write auth/kubernetes/config \
token_reviewer_jwt="eyJhbGciOiJSUz....." \
kubernetes_host="https://127.0.0.1:32769" \
kubernetes_ca_cert=@examples/guides/vault-server/ca.crt
N.B.: If the server address does not contain a port number, there is no need to add one; keep the address as it is.
Demo server address for GKE:
server: https://35.203.181.169
Demo server address for a DigitalOcean k8s cluster:
server: https://e8dabcb3-**bb-451e****d5.k8s.ondigitalocean.com
This has been resolved.
I had to add the ALB DNS name to the API server certificate and then reboot all nodes.
Authentication is now working fine.
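For reference, on a kubeadm-based cluster (an assumption; other installers handle certificates differently, and my-alb.example.com is a hypothetical name) the extra SAN can be added by regenerating the API server certificate:
# Assumes kubeadm default pki paths; back up the current cert/key first
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /tmp/
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=my-alb.example.com
# Then restart the kube-apiserver static pod (or reboot the nodes, as above)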

certificate signed by unknown authority when connecting to a remote Kubernetes cluster using kubectl

I am using kubectl to connect to a remote Kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS:
scp -r root@ip:~/.kube/config ~/.kube
and changed the URL to https://kube-ctl.example.com; I exposed the API server to the internet:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://k8s-ctl.example.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: kube-system
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
When I get cluster pod info on my local Mac:
kubectl get pods --all-namespaces
it gives this error:
Unable to connect to the server: x509: certificate signed by unknown authority
When I access https://k8s-ctl.example.com in Google Chrome, the result is:
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "Unauthorized",
  reason: "Unauthorized",
  code: 401
}
What should I do to successfully access the remote k8s cluster using the kubectl client?
One approach I have tried is using this .kube/config (generated by command), but I get the same result:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://k8s-ctl.example.com
  name: default
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: ssl/admin.pem
    client-key: ssl/admin-key.pem
I've reproduced your problem, and since you created your cluster following kubernetes-the-hard-way, you need to follow these steps to be able to access your cluster from a different console.
First you have to copy the following certificates, created while you were bootstrapping your cluster, to the ~/.kube/ directory on your local machine:
ca.pem
admin.pem
admin-key.pem
After copying these files to your local machine, execute the following commands:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
Notice that you have to replace the ${KUBERNETES_PUBLIC_ADDRESS} variable with the remote address of your cluster.
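Once the context is active, a quick check against the remote address should confirm both the server certificate and your client certificate are accepted:
kubectl cluster-info
kubectl get nodes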
When kubectl interacts with the kube API server, it will validate the kube API server certificate and also send the certificate in client-certificate to the kube API server for mutual TLS authentication. I believe the problem is one of the following:
the CA that you used to generate the client-certificate is not the CA that was used to start up the kube API server;
the CA in certificate-authority-data is not the CA used to generate the kube API server certificate.
If you make sure that you are using the same CA to generate all the certificates consistently across the board, then it should work.
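A quick way to check that consistency (a sketch, reusing the file names from the generated kubeconfig above) is to compare issuers and subjects with openssl:
# The admin cert should verify against the CA used by the API server
openssl verify -CAfile ssl/ca.pem ssl/admin.pem
# The serving certificate presented on port 443 should be issued by the same CA
echo | openssl s_client -connect k8s-ctl.example.com:443 2>/dev/null \
  | openssl x509 -noout -issuer
openssl x509 -in ssl/ca.pem -noout -subject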

Kubernetes initial password (GCP)? (not using kops?)

Generally you can use kops get secrets kube --type secret -oplaintext, but I am not running on AWS and am using GCP.
I read that kubectl config view should show this info, but I see no such thing (wondering if this has to do with GCP service account setup; I am also using GKE).
kubectl config view returns something like:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://MY_IP
  name: MY_CLUSTER_NAME
contexts:
- context:
    cluster: MY_CLUSTER_NAME
    user: MY_CLUSTER_NAME
  name: MY_CLUSTER_NAME
current-context: MY_CONTEXT_NAME
kind: Config
preferences: {}
users:
- name: MY_CLUSTER_NAME
  user:
    auth-provider:
      config:
        access-token: MY_ACCESS_TOKEN
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: 2019-02-27T03:20:49Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
Neither Username=>Admin nor Username=>MY_CLUSTER_NAME worked with Password=>MY_ACCESS_TOKEN.
Any ideas?
Try:
gcloud container clusters describe ${CLUSTER} \
--flatten="masterAuth" \
[--zone=${ZONE}|--region=${REGION}] \
--project=${PROJECT}
It's possible that your cluster has basic authentication (username|password) disabled, as this authentication mechanism is discouraged.
An alternative mechanism provided with Kubernetes Engine (as shown in your config) is to use your gcloud credentials to get onto the cluster.
The following command will configure ~/.kube/config so that you may access the cluster using your gcloud credentials. It looks as though this step has already been done, so you can use kubectl directly.
gcloud container clusters get-credentials ${CLUSTER} \
[--zone=${ZONE}|--region=${REGION}] \
--project=${PROJECT}
As long as you're logged in using gcloud with an account that's permitted to use the cluster, you should be able to:
kubectl cluster-info
kubectl get nodes
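If the cluster does still have basic auth enabled and you specifically need that username and password, the describe output can be narrowed to just those fields (a sketch; newer clusters typically leave masterAuth empty):
gcloud container clusters describe ${CLUSTER} \
--zone=${ZONE} \
--project=${PROJECT} \
--format="value(masterAuth.username,masterAuth.password)"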

Connecting to kubernetes cluster from different kubectl clients

I have installed a Kubernetes cluster using kops.
From the node where kops installed kubectl, everything works perfectly (let's say node A).
I'm trying to connect to the Kubernetes cluster from another server with kubectl installed on it (node B). I have copied ~/.kube from node A to node B.
But when I try to execute a basic command like:
kubectl get pods
I'm getting:
Unable to connect to the server: x509: certificate signed by unknown authority
My config file is:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSU.........
    server: https://api.kub.domain.com
  name: kub.domain.com
contexts:
- context:
    cluster: kub.domain.com
    user: kub.domain.com
  name: kub.domain.com
current-context: kub.domain.com
kind: Config
preferences: {}
users:
- name: kub.domain.com
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F..........
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk..........
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
- name: kub.domain.com-basic-auth
  user:
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
Appreciate any help
Let's try to troubleshoot these two.
Unable to connect to the server:
Check whether you have any firewall rules in place. Is your node running in a virtual machine?
x509: certificate signed by unknown authority
Can you compare the certificates on both servers and confirm you are getting the same certificates?
curl -v -k $(grep 'server:' ~/.kube/config|sed 's/server://')
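To compare what each node actually trusts, you can also decode the CA embedded in each kubeconfig and diff the result between node A and node B (a sketch, assuming the default ~/.kube/config path and a Linux base64):
# Run on both nodes; subject, validity and fingerprint should be identical
grep 'certificate-authority-data' ~/.kube/config | awk '{print $2}' \
  | base64 -d | openssl x509 -noout -subject -enddate -fingerprint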