Connecting to kubernetes cluster from different kubectl clients - kubernetes

I have installed a Kubernetes cluster using kops.
From the node where kops installed kubectl, everything works perfectly (let's say node A).
I'm trying to connect to the Kubernetes cluster from another server with kubectl installed on it (node B). I have copied ~/.kube from node A to node B.
But when I try to execute a basic command like:
kubectl get pods
I get:
Unable to connect to the server: x509: certificate signed by unknown authority
My config file is:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSU.........
    server: https://api.kub.domain.com
  name: kub.domain.com
contexts:
- context:
    cluster: kub.domain.com
    user: kub.domain.com
  name: kub.domain.com
current-context: kub.domain.com
kind: Config
preferences: {}
users:
- name: kub.domain.com
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F..........
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk..........
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
- name: kub.domain.com-basic-auth
  user:
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
Appreciate any help

Let's try to troubleshoot these two errors.
Unable to connect to the server:
Check whether you have any firewall rules in the way. Is your node running in a virtual machine?
x509: certificate signed by unknown authority
Can you compare the certificates on both servers and confirm they are the same?
curl -v -k $(grep 'server:' ~/.kube/config | sed 's/server://')
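A quick way to compare them is to fingerprint the CA certificate embedded in each node's kubeconfig. A minimal sketch (the /tmp demo files here are stand-ins; on node A and node B you would point grep at ~/.kube/config instead):

```shell
# Create a throwaway CA cert and a fake kubeconfig line for the demo
# (on a real node this data already sits in ~/.kube/config).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out /tmp/demo-ca.crt -subj "/CN=demo-ca" -days 1 2>/dev/null
printf 'certificate-authority-data: %s\n' \
    "$(base64 < /tmp/demo-ca.crt | tr -d '\n')" > /tmp/demo-kubeconfig
# Decode the CA out of the kubeconfig and print its SHA-256 fingerprint;
# run this on both nodes and diff the output -- the fingerprints must match.
grep 'certificate-authority-data:' /tmp/demo-kubeconfig | awk '{print $2}' \
    | base64 -d | openssl x509 -noout -fingerprint -sha256
```

If the fingerprints differ, the ~/.kube copy on node B is stale or points at a different cluster.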

Related

connect to kubernetes cluster from local machine using kubectl

I have installed a kubernetes cluster on EC2 instances on AWS.
1 master node and 2 worker nodes.
Everything works fine when I connect to the master node and issue commands using kubectl.
But I want to be able to issue kubectl commands from my local machine.
So I copied the contents of the .kube/config file from the master node to my local machine's .kube/config.
I have only changed the IP address of the server, because the original file references an internal IP. The file looks like this now:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1URXhNVEUyTXpneE5Gb1hEVE14TVRFd09U4M0xTCkJ1THZGK1VMdHExOHovNG0yZkFEMlh4dmV3emx0cEovOUlFbQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://35.166.48.257:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYkhZQStwL3UvM013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeE1URXhOak00TVRSYUZ3MHlNakV4TVRFeE5qTTRNVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVCQVFzRkFBT0NBUUVBdjJlVTBzU1cwNDdqUlZKTUQvYm1WK1VwWnRBbU1NVDJpMERNCjhCZjhDSm1WajQ4QlpMVmg4Ly82dnJNQUp6YnE5cStPa3dBSE1iWVQ4TTNHK09RUEdFcHd3SWRDdDBhSHdaRVQKL0hlVnI2eWJtT2VNeWZGNTJ1M3RIS3MxU1I1STM5WkJPMmVSU2lDeXRCVSsyZUlCVFkrbWZDb3JCRWRnTzJBMwpYQVVWVlJxRHVrejZ6OTAyZlJkd29yeWJLaU5mejVWYXdiM3VyQUxKMVBrOFpMNE53QU5vejBEL05HekpNT2ZUCjJGanlPeXcrRWFFMW96UFlRTnVaNFBuM1FWdlFMVTQycU5adGM0MmNKbUszdlBVWHc1LzBYbkQ4anNocHpNbnYKaFZPb2Y2ZGp6YzZMRGNzc1hxVGRYZVdIdURKMUJlcUZDbDliaDhQa1NQNzRMTnE3NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeVY1TGdGMjFvbVBBWGh2eHlzKzJIUi8xQXpLNThSMkRUUHdYYXZmSjduS1hKczh5CjBETkY5RTFLVmIvM0dwUDROcC84WEltRHFpUHVoN2J1YytYNkp1T0J0bGpwM0w1ZEFjWGxPaTRycWJMR1FBdzUKdG90UU94OHoyMHRLckFTbElUdUFwK3ZVMVR0M25hZ0xoK2JqdHVzV0wrVnBSdDI0d0JYbm93eU10ZW5HRUdLagpKRXJFSmxDc1pKeTRlZWdXVTZ3eDBHUm1TaElsaE9JRE9yenRValVMWVVVNUJJODBEMDVSSzBjeWRtUjVYTFJ1CldIS0kxZ3hZRnBPTlh4VVlOVWMvVU1YbjM0UVdJeE9GTTJtSWd4cG1jS09vY3hUSjhYWWRLV2tndDZoN21rbGkKejhwYjV1VUZtNURJczljdEU3cFhiUVNESlQzeXpFWGFvTzJQa1FJREFRQUJBb0lCQUhhZ1pqb28UZCMGNoaUFLYnh1RWNLWEEvYndzR3RqU0J5MFNFCmtyQ2FlU1BBV0hBVUZIWlZIRWtWb1FLQmdERllwTTJ2QktIUFczRk85bDQ2ZEIzUE1IMHNMSEdCMmN2Y3JZbFMKUFY3bVRhc2Y0UEhxazB3azlDYllITzd0UVg0dlpBVXBVZWZINDhvc1dJSjZxWHorcTEweXA4cDNSTGptaThHSQoyUE9rQmQ0U05IY0habXRUcExEYzhsWG13aXl2Z1RNakNrU0tWd3l5UDVkUlNZZGVWbUdFSDl1OXJZVWtUTkpwCjRzQUJBb0dCQUpJZjA4TWl2d3h2Z05BQThxalllYTQzTUxUVnJuL3l0ck9LU0RqSXRkdm9QYnYrWXFQTnArOUUKdUZONDlHRENtc0UvQUJwclRpS2hyZ0I4aGI4SkM5d3A3RmdCQ25IU0tiOVVpVG1KSDZQcDVYRkNKMlJFODNVNQp0NDBieFE0NXY3VzlHRi94MWFpaW9nVUlNcTkxS21Vb1RUbjZhZHVkMWM5bk5yZmt3cXp3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
When I try to use a kubectl command from my local machine I get this error :
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 172.31.4.108, not 35.166.48.257
This is because the kube-apiserver TLS certificate is only valid for 10.96.0.1 and 172.31.4.108, not for 35.166.48.257. There are several options, like telling kubectl to skip TLS verification, but I would not recommend that. The best would be to regenerate the whole PKI on your cluster.
Both ways are described here.
Next time, for a kubeadm cluster, you can use --apiserver-cert-extra-sans=EXTERNAL_IP at cluster init to also add the external IP to the API server TLS certificate.
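You can inspect which IP SANs a serving certificate actually contains with openssl. A minimal sketch (requires OpenSSL 1.1.1+ for -addext/-ext; the demo cert reproduces the situation from the error message, valid for the internal IPs but not the public one):

```shell
# Build a demo "API server" cert with only the internal IPs as SANs.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/apiserver.key \
    -out /tmp/apiserver.crt -subj "/CN=kube-apiserver" -days 1 \
    -addext "subjectAltName=IP:10.96.0.1,IP:172.31.4.108" 2>/dev/null
# List the SANs -- 35.166.48.257 is absent, hence the x509 error.
openssl x509 -noout -ext subjectAltName -in /tmp/apiserver.crt
```

Against a live cluster you can check the same thing remotely with something like `openssl s_client -connect 35.166.48.257:6443 </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName`.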

kubectl minikube renew certificate

I'm using kubectl to access the API server on my minikube cluster on Ubuntu,
but when I try to use a kubectl command I get a certificate expired error:
/home/ayoub# kubectl get pods
Unable to connect to the server: x509: certificate has expired or is not yet valid: current time 2021-08-30T14:39:50+01:00 is before 2021-08-30T14:20:10Z
Here's my kubectl config:
/home/ayoub# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: REDACTED
How I can renew this certificate?
Posted community wiki for better visibility. Feel free to expand it.
There is similar issue opened on minikube GitHub.
The temporary workaround is to remove some files in the /var/lib/minikube/ directory, then reset the Kubernetes cluster and replace the keys on the host. Those steps are described in this answer.
The solution described in this blog solved the problem for me:
https://dev.to/boris/microk8s-unable-to-connect-to-the-server-x509-certificate-has-expired-or-is-not-yet-valid-2b73
In summary:
Run sudo microk8s.refresh-certs, then restart the server to reboot the microk8s cluster.
minikube delete - deletes the local Kubernetes cluster - worked for me
reference:
https://github.com/kubernetes/minikube/issues/10122#issuecomment-758227950
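To confirm that an expired certificate is really the culprit, you can check its validity window with openssl. A minimal sketch (the /tmp demo cert is a stand-in; on a minikube host you would point the second command at the API server cert, e.g. under /var/lib/minikube/certs/ as mentioned in the workaround above):

```shell
# Generate a short-lived demo certificate standing in for the cluster cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo.crt -subj "/CN=demo" -days 1 2>/dev/null
# Print the validity window; if notAfter is in the past, the cert has expired.
openssl x509 -noout -dates -in /tmp/demo.crt
```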

Accessing microk8s API for cluster behind router

I have a microk8s cluster composed of several Raspberry Pi 4, behind a Linksys router.
My computer and the cluster router are connected on my ISP router, and are respectively 192.168.0.10 & 192.168.0.2.
The cluster's subnet is composed of the following :
router : 192.168.1.10
microk8s master : 192.168.1.100 (fixed IP)
microk8s workers : 192.168.1.10X (via DHCP).
I can ssh from my computer to the master via a port forwarding 192.168.0.2:22 > 192.168.1.100:22
I can nmap the cluster via a port forwarding 192.168.0.2:16443 > 192.168.1.100:16443 (16443 being the API port for microk8s)
But I can't call the k8s API :
kubectl cluster-info
returns
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 10.152.183.1, 192.168.1.100, fc00::16d, fc00::dea6:32ff:fecc:a007, not 192.168.0.2
I've tried using the --insecure-skip-tls-verify, but :
error: You must be logged in to the server (Unauthorized)
My local (laptop) config is the following :
> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.2:16443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
I'd say I'd like to add 192.168.0.2 to the certificate, but all the answers I can find online refer to the --insecure-skip-tls-verify flag.
Can you help please ?
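For MicroK8s specifically, a common approach (an assumption based on the default snap layout; verify the path and placeholder on your master before editing) is to add the router's IP as an extra SAN in the CSR template at /var/snap/microk8s/current/certs/csr.conf.template, above the #MOREIPS placeholder, so the [ alt_names ] section ends up looking something like:

```
[ alt_names ]
DNS.1 = kubernetes
IP.1 = 127.0.0.1
IP.99 = 192.168.0.2
#MOREIPS
```

Then regenerate the server certificates on the master (e.g. with sudo microk8s refresh-certs; the exact subcommand varies between MicroK8s versions) so that 192.168.0.2 is included in the API server certificate.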

certificate signed by unknown authority when connect to remote kubernetes cluster using kubectl

I am using kubectl to connect to a remote Kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS machine:
scp -r root@ip:~/.kube/config ~/.kube
and changed the URL to https://kube-ctl.example.com; I have exposed the API server to the internet:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://k8s-ctl.example.com
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: kube-system
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
When I get cluster pod info on my local Mac:
kubectl get pods --all-namespaces
it gives this error:
Unable to connect to the server: x509: certificate signed by unknown authority
When I access https://k8s-ctl.example.com in Google Chrome, the result is:
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "Unauthorized",
  reason: "Unauthorized",
  code: 401
}
What should I do to access the remote k8s cluster successfully using the kubectl client?
One way I have tried is using this .kube/config generated by command, but I get the same result:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ssl/ca.pem
    server: https://k8s-ctl.example.com
  name: default
contexts:
- context:
    cluster: default
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: ssl/admin.pem
    client-key: ssl/admin-key.pem
I've reproduced your problem; as you created your cluster following kubernetes-the-hard-way, you need to follow these steps to be able to access your cluster from a different console.
First you have to copy the following certificates, created while you were bootstrapping your cluster, to the ~/.kube/ directory on your local machine:
ca.pem
admin.pem
admin-key.pem
After copying these files to your local machine, execute the following commands:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
Notice that you have to replace the ${KUBERNETES_PUBLIC_ADDRESS} variable with the remote address to your cluster.
When kubectl interacts with the kube API server, it validates the kube API server certificate and also sends the certificate in client-certificate to the kube API server for mutual TLS authentication. I believe the problem is one of the following:
The CA that you used to generate the client-certificate is not the CA that was used to start up the kube API server.
The CA in certificate-authority-data is not the CA used to generate the kube API server certificate.
If you make sure that you use the same CA to generate all the certificates consistently across the board, then it should work.
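You can check the first condition offline with openssl verify. A minimal sketch (the /tmp demo files mirror the ca.pem/admin.pem names from the config above; substitute your real files):

```shell
# Create a demo CA, then issue a client cert signed by it.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca-key.pem \
    -out /tmp/ca.pem -subj "/CN=kubernetes-ca" -days 1 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout /tmp/admin-key.pem \
    -out /tmp/admin.csr -subj "/CN=admin" 2>/dev/null
openssl x509 -req -in /tmp/admin.csr -CA /tmp/ca.pem -CAkey /tmp/ca-key.pem \
    -CAcreateserial -out /tmp/admin.pem -days 1 2>/dev/null
# Prints "/tmp/admin.pem: OK" when the client cert chains to the CA;
# an error here means the certs were signed by different CAs.
openssl verify -CAfile /tmp/ca.pem /tmp/admin.pem
```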

ssl authentication for gcp kubernetes cluster is not working

For automation purposes, I have generated the Kubernetes configuration file using the API below.
request = (service.projects().zones().clusters()
           .get(projectId=project_id, zone=zone, clusterId=cluster_id))
The cluster has both basic and SSL authentication enabled, and only basic authentication is working properly. When I changed the user context from admin to ca-user, I got the below error.
Error from server (Forbidden): nodes is forbidden: User "client" cannot list nodes at the cluster scope: Unknown user "client"
The generated configuration file is given below.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: *************
    server: https://*******
  name: gke_demo-205812_us-central1-a_cluster-1
contexts:
- context:
    cluster: gke_demo-205812_us-central1-a_cluster-1
    user: ca-user
  name: gke_demo-205812_us-central1-a_cluster-1
current-context: gke_demo-205812_us-central1-a_cluster-1
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: *****************
    username: admin
- name: ca-user
  user:
    client-certificate-data: ******************
    client-key-data: ************************
Thanks in Advance. :)
Try again after running this command:
kubectl create clusterrolebinding client-admin \
--clusterrole=cluster-admin \
--user=client
You are giving cluster-admin permission to this user.