I have a MacBook (192.168.1.101) and a Mac mini (192.168.1.104) on the same Wi-Fi network.
I launched a Kubernetes cluster through Docker Desktop on the Mac mini and would like to access it through kubectl on the MacBook.
Here is what my ~/.kube/config on the Mac mini looks like:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ******
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-for-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: ******
    client-key-data: ******
How should I write ~/.kube/config on the MacBook? I followed the official docs and currently get the following errors.
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: http://192.168.1.104:6443
  name: macmini-cluster
contexts:
- context:
    cluster: macmini-cluster
    user: macmini-user
  name: macmini-context
current-context: macmini-context
kind: Config
preferences: {}
users:
- name: macmini-user
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get pods
The connection to the server 192.168.1.104 was refused - did you specify the right host or port?
Update:
I added port 6443 to the cluster's server field and tried to telnet to the Mac mini's port 6443, but got:
$ telnet 192.168.1.104 6443
Trying 192.168.1.104...
telnet: connect to address 192.168.1.104: Connection refused
telnet: Unable to connect to remote host
When I checked on macmini:
$ netstat -na|grep 6443
tcp4 0 0 127.0.0.1.6443 *.* LISTEN
There seems to be an unresolved related issue.
It seems your Kubernetes API server did not bind to an IPv4 address reachable from the local network; instead it is bound to the host's loopback adapter at 127.0.0.1:
$ netstat -na|grep 6443
tcp4 0 0 127.0.0.1.6443 *.* LISTEN
This means it can only be accessed from the machine running the process.
You need to proxy this port onto your local IPv4 network. On a Windows host you can do this with the following command, run in a Command Prompt as administrator on the machine hosting Kubernetes:
netsh interface portproxy add v4tov4 listenaddress=192.168.1.104 listenport=6443 connectaddress=127.0.0.1 connectport=6443
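Note that netsh only exists on Windows; since the host in this question is actually a Mac mini, a rough macOS equivalent (a sketch assuming socat is installed, e.g. via Homebrew) would be to run this on the Mac mini:
# listen on the LAN address and relay connections to the loopback-only API server
socat TCP-LISTEN:6443,bind=192.168.1.104,fork,reuseaddr TCP:127.0.0.1:6443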
On the MacBook, the port number has to be specified as below; that's the port of the Kubernetes API server (which serves HTTPS, not plain HTTP).
server: https://192.168.1.104:6443
You can just copy your .kube/config file from the Mac mini to the MacBook; you don't have to write the config file again if you want to use the same context.
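For example, assuming Remote Login (SSH) is enabled on the Mac mini:
# back up any existing kubeconfig first, then pull the Mac mini's copy
scp 192.168.1.104:.kube/config ~/.kube/config
You will still need to change the server field to an address the MacBook can reach, since kubernetes.docker.internal will not resolve to the Mac mini from the MacBook.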
There's an internal hostname, docker-desktop, that points to the Kubernetes API server. This hostname can be reached by any container without the --link option, which lets us use the port-forwarding hack below:
docker run -d -p 0.0.0.0:6444:6443 bobrik/socat TCP-LISTEN:6443,fork TCP:docker-desktop:6443
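Before pointing kubectl at it, you can sanity-check the forward from the MacBook (a hedged example; -k skips certificate verification, and /version is typically readable without credentials):
curl -k https://192.168.1.104:6444/version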
I once thought about leveraging a Kubernetes Service for this, but I haven't had time to keep digging; hopefully someone else has ideas on this trick.
In addition, don't forget to make a small change to your ~/.kube/config, as below, to skip the x509 certificate verification:
clusters:
- cluster:
    server: https://<your docker host>:6444
    insecure-skip-tls-verify: true
  name: docker-desktop
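Alternatively, if you'd rather keep TLS verification: Docker Desktop's own kubeconfig talks to the server as kubernetes.docker.internal, so that name should be among the certificate's SANs. Under that assumption, you can map the hostname to the Mac mini's LAN IP on the MacBook and keep the original certificate-authority-data:
# on the MacBook: make the cert's hostname resolve to the Mac mini
echo "192.168.1.104 kubernetes.docker.internal" | sudo tee -a /etc/hosts
# then use server: https://kubernetes.docker.internal:6444 in ~/.kube/config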
Related
I have installed a kubernetes cluster on EC2 instances on AWS.
1 master node and 2 worker nodes.
Everything works fine when I connect to the master node and issue commands using kubectl.
But I want to be able to issue kubectl commands from my local machine.
So I copied the contents of .kube/config file from master node to my local machine's .kube/config.
I have only changed the IP address of the server, because the original file references an internal IP. The file now looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1URXhNVEUyTXpneE5Gb1hEVE14TVRFd09U4M0xTCkJ1THZGK1VMdHExOHovNG0yZkFEMlh4dmV3emx0cEovOUlFbQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://35.166.48.257:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin#kubernetes
current-context: kubernetes-admin#kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYkhZQStwL3UvM013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeE1URXhOak00TVRSYUZ3MHlNakV4TVRFeE5qTTRNVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVCQVFzRkFBT0NBUUVBdjJlVTBzU1cwNDdqUlZKTUQvYm1WK1VwWnRBbU1NVDJpMERNCjhCZjhDSm1WajQ4QlpMVmg4Ly82dnJNQUp6YnE5cStPa3dBSE1iWVQ4TTNHK09RUEdFcHd3SWRDdDBhSHdaRVQKL0hlVnI2eWJtT2VNeWZGNTJ1M3RIS3MxU1I1STM5WkJPMmVSU2lDeXRCVSsyZUlCVFkrbWZDb3JCRWRnTzJBMwpYQVVWVlJxRHVrejZ6OTAyZlJkd29yeWJLaU5mejVWYXdiM3VyQUxKMVBrOFpMNE53QU5vejBEL05HekpNT2ZUCjJGanlPeXcrRWFFMW96UFlRTnVaNFBuM1FWdlFMVTQycU5adGM0MmNKbUszdlBVWHc1LzBYbkQ4anNocHpNbnYKaFZPb2Y2ZGp6YzZMRGNzc1hxVGRYZVdIdURKMUJlcUZDbDliaDhQa1NQNzRMTnE3NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeVY1TGdGMjFvbVBBWGh2eHlzKzJIUi8xQXpLNThSMkRUUHdYYXZmSjduS1hKczh5CjBETkY5RTFLVmIvM0dwUDROcC84WEltRHFpUHVoN2J1YytYNkp1T0J0bGpwM0w1ZEFjWGxPaTRycWJMR1FBdzUKdG90UU94OHoyMHRLckFTbElUdUFwK3ZVMVR0M25hZ0xoK2JqdHVzV0wrVnBSdDI0d0JYbm93eU10ZW5HRUdLagpKRXJFSmxDc1pKeTRlZWdXVTZ3eDBHUm1TaElsaE9JRE9yenRValVMWVVVNUJJODBEMDVSSzBjeWRtUjVYTFJ1CldIS0kxZ3hZRnBPTlh4VVlOVWMvVU1YbjM0UVdJeE9GTTJtSWd4cG1jS09vY3hUSjhYWWRLV2tndDZoN21rbGkKejhwYjV1VUZtNURJczljdEU3cFhiUVNESlQzeXpFWGFvTzJQa1FJREFRQUJBb0lCQUhhZ1pqb28UZCMGNoaUFLYnh1RWNLWEEvYndzR3RqU0J5MFNFCmtyQ2FlU1BBV0hBVUZIWlZIRWtWb1FLQmdERllwTTJ2QktIUFczRk85bDQ2ZEIzUE1IMHNMSEdCMmN2Y3JZbFMKUFY3bVRhc2Y0UEhxazB3azlDYllITzd0UVg0dlpBVXBVZWZINDhvc1dJSjZxWHorcTEweXA4cDNSTGptaThHSQoyUE9rQmQ0U05IY0habXRUcExEYzhsWG13aXl2Z1RNakNrU0tWd3l5UDVkUlNZZGVWbUdFSDl1OXJZVWtUTkpwCjRzQUJBb0dCQUpJZjA4TWl2d3h2Z05BQThxalllYTQzTUxUVnJuL3l0ck9LU0RqSXRkdm9QYnYrWXFQTnArOUUKdUZONDlHRENtc0UvQUJwclRpS2hyZ0I4aGI4SkM5d3A3RmdCQ25IU0tiOVVpVG1KSDZQcDVYRkNKMlJFODNVNQp0NDBieFE0NXY3VzlHRi94MWFpaW9nVUlNcTkxS21Vb1RUbjZhZHVkMWM5bk5yZmt3cXp3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
When I try to use a kubectl command from my local machine I get this error :
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 172.31.4.108, not 35.166.48.257
This is because the kube-apiserver TLS certificate is only valid for 10.96.0.1 and 172.31.4.108, not for 35.166.48.257. There are several options, like telling kubectl to skip TLS verification, but I would not recommend that. The best option is to regenerate the whole PKI on your cluster.
Both ways are described here.
Next time, for a kubeadm cluster, you can pass --apiserver-cert-extra-sans=EXTERNAL_IP at cluster init to also add the external IP to the API server's TLS certificate.
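On an existing kubeadm cluster, regenerating just the API server's serving certificate looks roughly like this (a sketch to run on the master node; the IP is the external address from the question, and the backup directory is arbitrary):
# back up the current serving cert, then re-issue it with the extra SAN
sudo mkdir -p /root/pki-backup
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=35.166.48.257
# the static kube-apiserver pod must then be restarted to pick up the new cert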
I was trying to configure a new installation of the Lens IDE to work with my remote cluster (on a remote server, in a VM), but encountered some errors and can't find a proper explanation for this case.
Lens expects a config file, so I gave it the one from my cluster, having changed it from
server: https://127.0.0.1:6443
to
server: https://(address of the remote server):(intermediate port forwarded to 6443 of the VM running the cluster)
After which in Lens I'm getting this:
2021/06/14 22:55:13 http: proxy error: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.1.122, not (address to the remote server)
I can see that some certificate has to be reconfigured, but I'm absolutely new to this.
Here are the full contents of the original config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0...
    client-key-data: LS0...
The solution is quite obvious and easy: k3s has to add the new IP to the certificate. By default it includes only localhost and the IP of the node it's running on, so if you (like me) have some kind of machine in front of it (like a load balancer or a dedicated firewall), its IP has to be added manually.
There are two ways to do it:
During the installation of k3s:
curl -sfL https://get.k3s.io | sh -s - server --tls-san <desired IP>
Alternatively, the argument can be added to an already-installed k3s by editing its systemd unit:
sudo nano /etc/systemd/system/k3s.service
ExecStart=/usr/local/bin/k3s \
    server \
    '--tls-san' \
    '<desired IP>' \
Then reload and restart the service:
sudo systemctl daemon-reload
sudo systemctl restart k3s
P.S. I have, however, faced issues with the second method.
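Either way, you can confirm the SAN actually made it into the serving certificate from the client side (a quick check with openssl; replace <desired IP> as above):
openssl s_client -connect <desired IP>:6443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'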
I have a microk8s cluster composed of several Raspberry Pi 4 boards, behind a Linksys router.
My computer and the cluster's router are both connected to my ISP router, and are respectively 192.168.0.10 and 192.168.0.2.
The cluster's subnet is composed of the following :
router : 192.168.1.10
microk8s master : 192.168.1.100 (fixed IP)
microk8s workers : 192.168.1.10X (via DHCP).
I can ssh from my computer to the master via port forwarding 192.168.0.2:22 > 192.168.1.100:22.
I can nmap the cluster via port forwarding 192.168.0.2:16443 > 192.168.1.100:16443 (16443 being the API port for microk8s).
But I can't call the k8s API:
kubectl cluster-info
returns
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 10.152.183.1, 192.168.1.100, fc00::16d, fc00::dea6:32ff:fecc:a007, not 192.168.0.2
I've tried using the --insecure-skip-tls-verify flag, but:
error: You must be logged in to the server (Unauthorized)
My local (laptop) config is the following:
> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.2:16443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
I'd say I'd like to add 192.168.0.2 to the certificate, but all the answers I can find online refer to the --insecure-skip-tls-verify flag.
Can you help, please?
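For what it's worth, on microk8s the usual way to add a SAN (a sketch based on my understanding; the template path and refresh command may vary by microk8s version) is to add the IP to the CSR template on the master and refresh the certificates:
# on the master: add an extra alt_names entry (the slot number IP.10 is arbitrary)
sudo nano /var/snap/microk8s/current/certs/csr.conf.template
#   under [ alt_names ] add: IP.10 = 192.168.0.2
# then regenerate the server certificate
sudo microk8s refresh-certs --cert server.crt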
I was trying out the Tectonic Kubernetes sandbox setup, and according to their documentation:
https://coreos.com/tectonic/docs/latest/tutorials/first-app.html
I downloaded kubectl and the corresponding kubeconfig files, but when I tried to get the namespaces using the following command:
kubectl get namespaces
I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What is this? Where is it picking up this localhost:8080 from?
EDIT:
Joe-MacBook-Pro:~ joe$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
Joe-MacBook-Pro:~ joe$
I'm lacking some details on your setup, but the problem is basically clear: you're not connected to the cluster.
You should have a kubeconfig file containing the cluster connection information, i.e. the context. I assume that if you run kubectl config view you'll get nothing.
I'm on Windows using Git Bash; if I run the same command I get:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://platform-svc-integration.net
  name: svc-integration
contexts:
- context:
    cluster: svc-integration
    user: svc-integration-admin
  name: svc-integration-system
current-context: svc-integration-system
kind: Config
preferences: {}
users:
- name: svc-integration-admin
  user:
    client-certificate: <path>/admin/admin.crt
    client-key: <path>/admin/admin.key
Basically, what I'm trying to say is that you need to configure your context. Start by running kubectl config --help to list your options; it's pretty straightforward, but if you don't manage, just refer to the documentation.
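For example, building a context from scratch (with hypothetical names, and the CA/client cert paths standing in for whatever files Tectonic gave you) might look like:
kubectl config set-cluster my-cluster --server=https://<api-endpoint> --certificate-authority=<path>/ca.crt
kubectl config set-credentials my-admin --client-certificate=<path>/admin.crt --client-key=<path>/admin.key
kubectl config set-context my-context --cluster=my-cluster --user=my-admin
kubectl config use-context my-context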
I have installed a Kubernetes cluster using kops.
From the node where kops installed kubectl, everything works perfectly (let's say node A).
I'm trying to connect to the Kubernetes cluster from another server with kubectl installed on it (node B). I have copied ~/.kube from node A to node B.
But when I try to execute a basic command like:
kubectl get pods
I'm getting:
Unable to connect to the server: x509: certificate signed by unknown authority
My config file is:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSU.........
    server: https://api.kub.domain.com
  name: kub.domain.com
contexts:
- context:
    cluster: kub.domain.com
    user: kub.domain.com
  name: kub.domain.com
current-context: kub.domain.com
kind: Config
preferences: {}
users:
- name: kub.domain.com
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F..........
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk..........
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
- name: kub.domain.com-basic-auth
  user:
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
I appreciate any help.
Let's try to troubleshoot these two.
Unable to connect to the server:
Check whether you have any firewall rules in place. Is your node running in a virtual machine?
x509: certificate signed by unknown authority
Can you compare the certificates on both servers and confirm you're getting the same certificate?
curl -v -k $(grep 'server:' ~/.kube/config | sed 's/server://')
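If the served certificate looks right, a follow-up check (a sketch assuming you can SSH from node B to node A) is to diff the CA data embedded in the two kubeconfigs; a mismatch would explain the unknown-authority error:
# compare the CA baked into node A's kubeconfig with the local copy on node B
diff <(ssh nodeA 'grep certificate-authority-data ~/.kube/config') \
     <(grep certificate-authority-data ~/.kube/config)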