I have a microk8s cluster composed of several Raspberry Pi 4 boards, behind a Linksys router.
My computer and the cluster's router are both connected to my ISP router, at 192.168.0.10 and 192.168.0.2 respectively.
The cluster's subnet is composed of the following:
router: 192.168.1.10
microk8s master: 192.168.1.100 (fixed IP)
microk8s workers: 192.168.1.10X (via DHCP).
I can ssh from my computer to the master via a port forward 192.168.0.2:22 > 192.168.1.100:22.
I can nmap the cluster via a port forward 192.168.0.2:16443 > 192.168.1.100:16443 (16443 being the API port for microk8s).
But I can't call the k8s API:
kubectl cluster-info
returns
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 10.152.183.1, 192.168.1.100, fc00::16d, fc00::dea6:32ff:fecc:a007, not 192.168.0.2
I've tried using the --insecure-skip-tls-verify flag, but:
error: You must be logged in to the server (Unauthorized)
My local (laptop) config is the following:
> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.2:16443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
I'd like to add 192.168.0.2 to the certificate, but all the answers I can find online refer to the --insecure-skip-tls-verify flag.
Can you help, please?
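For MicroK8s specifically, a minimal sketch of adding the extra IP as a subject alternative name, assuming a snap-based install where the CSR template lives at /var/snap/microk8s/current/certs/csr.conf.template:

# on the master, add the forwarded IP under [alt_names] in the CSR template
# (pick any unused IP.x index), e.g.:
#   IP.100 = 192.168.0.2
sudo nano /var/snap/microk8s/current/certs/csr.conf.template

# depending on the MicroK8s version, either refresh the server certificate explicitly
sudo microk8s refresh-certs --cert server.crt
# or restart MicroK8s so the certificate is regenerated from the template
microk8s stop && microk8s start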
Related
I configured a kubernetes cluster with rke on premise (for now single node - control plane, worker and etcd).
The VM on which I launched the cluster is inside a VPN.
After successfully initializing the cluster, I managed to access it with kubectl from inside the VPN.
I tried to access the cluster from outside the VPN, so I updated the kubeconfig file and changed the following:
server: https://<the VM IP> to server: https://<the external IP>.
I also exposed port 6443.
When trying to access the cluster I get the following error:
E0912 16:23:39 proxy_server.go:147] Error while proxying request: x509: certificate is valid for <the VM IP>, 127.0.0.1, 10.43.0.1, not <the external IP>
My question is: how can I add the external IP to the certificate so that I can access the cluster with kubectl from outside the VPN?
The RKE configuration YAML:
# config.yml
nodes:
- address: <VM IP>
  hostname_override: control-plane-telemesser
  role: [controlplane, etcd, worker]
  user: motti
  ssh_key_path: /home/<USR>/.ssh/id_rsa
ssh_key_path: /home/<USR>/.ssh/id_rsa
cluster_name: my-cluster
ignore_docker_version: false
kubernetes_version:
services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
    snapshot: true
    creation: 6h
    retention: 24h
  kube-api:
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: 30000-32767
    pod_security_policy: false
  kube-controller:
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  kubelet:
    cluster_domain: cluster.local
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    extra_args:
      max-pods: 110
network:
  plugin: flannel
  options:
    flannel_backend_type: vxlan
dns:
  provider: coredns
authentication:
  strategy: x509
authorization:
  mode: rbac
ingress:
  provider: nginx
  options:
    use-forwarded-headers: "true"
monitoring:
  provider: metrics-server
Thanks,
So I found the solution for the RKE cluster configuration.
You need to add sans to the cluster.yml file in the authentication section:
authentication:
  strategy: x509
  sans:
    - "10.18.160.10"
After saving the file, just run rke up again and it will update the cluster.
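For completeness, a rough usage sketch (assuming the cluster file is named cluster.yml and you run RKE from the directory you originally provisioned from):

# apply the updated configuration; RKE regenerates the affected certificates
rke up --config cluster.yml
# RKE rewrites the kubeconfig next to the cluster file; use it from outside the VPN
export KUBECONFIG=$PWD/kube_config_cluster.yml
kubectl cluster-info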
I have installed a kubernetes cluster on EC2 instances on AWS.
1 master node and 2 worker nodes.
Everything works fine when I connect to the master node and issue commands using kubectl.
But I want to be able to issue kubectl commands from my local machine.
So I copied the contents of the .kube/config file from the master node to my local machine's .kube/config.
I have only changed the IP address of the server, because the original file references an internal IP. The file looks like this now:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1URXhNVEUyTXpneE5Gb1hEVE14TVRFd09U4M0xTCkJ1THZGK1VMdHExOHovNG0yZkFEMlh4dmV3emx0cEovOUlFbQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://35.166.48.257:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYkhZQStwL3UvM013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeE1URXhOak00TVRSYUZ3MHlNakV4TVRFeE5qTTRNVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVCQVFzRkFBT0NBUUVBdjJlVTBzU1cwNDdqUlZKTUQvYm1WK1VwWnRBbU1NVDJpMERNCjhCZjhDSm1WajQ4QlpMVmg4Ly82dnJNQUp6YnE5cStPa3dBSE1iWVQ4TTNHK09RUEdFcHd3SWRDdDBhSHdaRVQKL0hlVnI2eWJtT2VNeWZGNTJ1M3RIS3MxU1I1STM5WkJPMmVSU2lDeXRCVSsyZUlCVFkrbWZDb3JCRWRnTzJBMwpYQVVWVlJxRHVrejZ6OTAyZlJkd29yeWJLaU5mejVWYXdiM3VyQUxKMVBrOFpMNE53QU5vejBEL05HekpNT2ZUCjJGanlPeXcrRWFFMW96UFlRTnVaNFBuM1FWdlFMVTQycU5adGM0MmNKbUszdlBVWHc1LzBYbkQ4anNocHpNbnYKaFZPb2Y2ZGp6YzZMRGNzc1hxVGRYZVdIdURKMUJlcUZDbDliaDhQa1NQNzRMTnE3NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeVY1TGdGMjFvbVBBWGh2eHlzKzJIUi8xQXpLNThSMkRUUHdYYXZmSjduS1hKczh5CjBETkY5RTFLVmIvM0dwUDROcC84WEltRHFpUHVoN2J1YytYNkp1T0J0bGpwM0w1ZEFjWGxPaTRycWJMR1FBdzUKdG90UU94OHoyMHRLckFTbElUdUFwK3ZVMVR0M25hZ0xoK2JqdHVzV0wrVnBSdDI0d0JYbm93eU10ZW5HRUdLagpKRXJFSmxDc1pKeTRlZWdXVTZ3eDBHUm1TaElsaE9JRE9yenRValVMWVVVNUJJODBEMDVSSzBjeWRtUjVYTFJ1CldIS0kxZ3hZRnBPTlh4VVlOVWMvVU1YbjM0UVdJeE9GTTJtSWd4cG1jS09vY3hUSjhYWWRLV2tndDZoN21rbGkKejhwYjV1VUZtNURJczljdEU3cFhiUVNESlQzeXpFWGFvTzJQa1FJREFRQUJBb0lCQUhhZ1pqb28UZCMGNoaUFLYnh1RWNLWEEvYndzR3RqU0J5MFNFCmtyQ2FlU1BBV0hBVUZIWlZIRWtWb1FLQmdERllwTTJ2QktIUFczRk85bDQ2ZEIzUE1IMHNMSEdCMmN2Y3JZbFMKUFY3bVRhc2Y0UEhxazB3azlDYllITzd0UVg0dlpBVXBVZWZINDhvc1dJSjZxWHorcTEweXA4cDNSTGptaThHSQoyUE9rQmQ0U05IY0habXRUcExEYzhsWG13aXl2Z1RNakNrU0tWd3l5UDVkUlNZZGVWbUdFSDl1OXJZVWtUTkpwCjRzQUJBb0dCQUpJZjA4TWl2d3h2Z05BQThxalllYTQzTUxUVnJuL3l0ck9LU0RqSXRkdm9QYnYrWXFQTnArOUUKdUZONDlHRENtc0UvQUJwclRpS2hyZ0I4aGI4SkM5d3A3RmdCQ25IU0tiOVVpVG1KSDZQcDVYRkNKMlJFODNVNQp0NDBieFE0NXY3VzlHRi94MWFpaW9nVUlNcTkxS21Vb1RUbjZhZHVkMWM5bk5yZmt3cXp3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
When I try to use a kubectl command from my local machine I get this error:
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 172.31.4.108, not 35.166.48.257
This is because the kube-apiserver TLS cert is only valid for 10.96.0.1 and 172.31.4.108, not for 35.166.48.257. There are several options, like telling kubectl to skip TLS verification, but I would not recommend that. The best would be to regenerate the whole PKI on your cluster.
Both ways are described here
Next time, for a kubeadm cluster, you can use --apiserver-cert-extra-sans=EXTERNAL_IP at cluster init to also add the external IP to the API server TLS cert.
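A rough sketch of both variants, assuming a kubeadm-managed control plane (35.166.48.257 stands in for your external IP; paths are the kubeadm defaults):

# at initial cluster creation
kubeadm init --apiserver-cert-extra-sans=35.166.48.257

# on an existing cluster: back up and regenerate only the API server serving certificate
sudo mkdir -p /root/pki-backup
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=35.166.48.257
# restart the kube-apiserver static pod (e.g. move its manifest out of
# /etc/kubernetes/manifests and back) so it picks up the new certificate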
I have a macbook (192.168.1.101) and a macmini (192.168.1.104) on the same wifi.
I launched a k8s cluster through docker-desktop on the macmini and would like to access it through kubectl on the macbook.
Here is what my ~/.kube/config on the macmini looks like:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ******
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-for-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: ******
    client-key-data: ******
How can I write ~/.kube/config on the macbook? Currently I followed the official doc and got the following errors.
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: http://192.168.1.104:6443
  name: macmini-cluster
contexts:
- context:
    cluster: macmini-cluster
    user: macmini-user
  name: macmini-context
current-context: macmini-context
kind: Config
preferences: {}
users:
- name: macmini-user
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
$ kubectl get pods
The connection to the server 192.168.1.104 was refused - did you specify the right host or port?
Update:
I added port 6443 to the cluster's server field and tried to telnet to the macmini's port 6443, but got:
$ telnet 192.168.1.104 6443
Trying 192.168.1.104...
telnet: connect to address 192.168.1.104: Connection refused
telnet: Unable to connect to remote host
When I checked on macmini:
$ netstat -na|grep 6443
tcp4 0 0 127.0.0.1.6443 *.* LISTEN
There seems to be an unresolved related issue.
It seems your Kubernetes API server did not bind to a network-accessible IPv4 address; instead it is bound to the host's loopback adapter at 127.0.0.1:
$ netstat -na|grep 6443
tcp4 0 0 127.0.0.1.6443 *.* LISTEN
This means it can only be accessed from the machine running the process.
You need to proxy this port to your local IPv4 network. You can do this with the command below, run as administrator in a command prompt on the computer hosting Kubernetes:
netsh interface portproxy add v4tov4 listenaddress=192.168.1.104 listenport=6443 connectaddress=127.0.0.1 connectport=6443
On the macbook, the port number has to be specified as below; it is the port number of the K8s API server.
server: https://192.168.1.104:6443
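A small sketch of pointing the macbook's existing kubeconfig entry at that address (the cluster and context names are just the ones from the config shown above):

kubectl config set-cluster macmini-cluster --server=https://192.168.1.104:6443
kubectl config use-context macmini-context
kubectl get pods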
You can just copy your .kube/config file from the macmini to the macbook; you don't have to write the config file again if you want to use the same context.
There's an internal hostname, docker-desktop, pointing to the Kubernetes API server. This hostname can be reached by any container without the --link option, so we can use the port-forwarding trick below:
docker run -d -p 0.0.0.0:6444:6443 bobrik/socat TCP-LISTEN:6443,fork TCP:docker-desktop:6443
I once thought about leveraging a Kubernetes Service instead, but had no time to keep digging; hopefully someone else has ideas on this trick.
In addition, don't forget to make a small change in your ~/.kube/config, as below, to skip the x509 certificate verification:
clusters:
- cluster:
    server: https://<your docker host>:6444
    insecure-skip-tls-verify: true
  name: docker-desktop
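A quick check from the macbook, assuming the socat container above is running on the macmini (192.168.1.104) and the kubeconfig change has been applied:

# the API server is now reachable on the LAN through the forwarded port;
# even an unauthenticated response (401/403) proves connectivity
curl -k https://192.168.1.104:6444/version
kubectl get nodes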
I was trying to authenticate Kubernetes with an external Vault using HashiCorp's tutorial, https://learn.hashicorp.com/vault/identity-access-management/vault-agent-k8s
In the configuration below, we have to provide the endpoint of our cluster in K8S_HOST:
vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="https://$K8S_HOST:8443" \
    kubernetes_ca_cert="$SA_CA_CRT"
I have set up the Kubernetes HA cluster in a private subnet with an ALB in front. I need help configuring the K8S_HOST endpoint.
As of now I have generated SSL certs and recreated the dashboard.
Tried exposing the kubernetes-dashboard as a NodePort.
Updated the certificate on the ALB, which is listening on 443.
But it is still not connecting to the cluster.
So my doubt is: is K8S_HOST:8443 the same endpoint as the Kubernetes dashboard, or something else?
What is the proper way to get the K8S_HOST details for a cluster in a private subnet?
Can someone please help with this? I am stuck here.
Use the kubectl config view command to view the cluster configuration:
$ kubectl config view --flatten --minify
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETXdOakE0TlRFd05sb1hEVE13TURNd05EQTROVEV3Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRlClg5eWZpN0JhVVlUNmhUcEUvNm02WW5HczlZSHY3SmFMOGxsWWsvOENUVjBRcUk4VjBOYnB5V3ByQjBadmV4ZmMKQ0NTQ2hkYWFlcVBQWUJDckxTSGVoVllZcE1PK2UrMVFkTFN2RmpnZUQ1UHY0NFBqTW1MeFAzVkk0MFVZOXVNNwpCcjRueVRPYnJpWXJaSVhTYjdTbWRTdFg5TUgreVVXclNrRllGSEhnREVRdXF0dFRJZ1pjdUh2MzY3Nkpyc1FuCmI1TlM0ZHJyc0U0NVZUcWYrSXR1NzRSa1VkOUsvcTNHMHN1SlVMZ3AxOUZ4ZXorYTNRenJiUTdyWTlGUEhsSG4KVno1N1dWYmt2cjMzOUxnNWd0VzB4am10Q1hJaGgzNFRqRE1OazNza0VvcFBibjJPcER5STVUMUtOL3Vsa0FmTAptcXJ4bU5VNEVVYy9NcWFoVlVrQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBL3c0OEFkdFU3Tkx2d0k1S2N4Y3hMMitBT0IKV29nakFKNUMwTlBTN1NDR2tWK2d6dlcrZHZVYWVtYTFHUFlJdUJuajBVR2k2QUF5SStES0tiY01iL2dVVUdKQQp0YVpEcFpidU1lZ1JZOVZ2dlpMZXRZQndESzkvWk9lYnF1MGh6eWo4TzduTnJaM3RIb3h6VW1MaVVIU2Jmc0R1CnkvaE9IM0wvUE1mZ0FFaHF5SVZwWGMvQzZCYWNlOEtRSWJMQ0hYZmZjTGhEWDQ0THZYSXVIL1Y3LzN1cHcxWm8KK05NcFY5Sys4TTExNHV2bWdyOHdTNkZHYlltdXFVZy9CTlpRd2FqKzVWMEZ6azZzeHoySTdZSXI3NHVNK3BLRgpMS3lEQzJnK2NXTU5YZTV0S0YrVG5zUXE1eWtNVEJKeHl1bTh5a3VtZTE4MGcyS1o3NzVTdVF1Ni9kND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://127.0.0.1:32769 # <<-----------here
name: kind-kind
contexts:
- context:
cluster: kind-kind
user: kind-kind
name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
... ... ...
Copy the server address and use it as kubernetes_host while configuring the Vault Kubernetes auth method.
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="eyJhbGciOiJSUz....." \
    kubernetes_host="https://127.0.0.1:32769" \
    kubernetes_ca_cert=@examples/guides/vault-server/ca.crt
N.B.: If the server address does not contain a port number, there's no need to add one. Keep the address as it is.
Demo server address for GKE:
server: https://35.203.181.169
Demo server address for DigitalOcean k8s cluster:
server: https://e8dabcb3-**bb-451e****d5.k8s.ondigitalocean.com
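As a convenience, a hedged sketch of pulling both values straight out of the active kubeconfig (the jsonpath queries and the k8s-ca.crt file name are illustrative, not from the tutorial):

# extract the API server address and the CA bundle
K8S_HOST=$(kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.server}')
kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode > k8s-ca.crt

vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="$K8S_HOST" \
    kubernetes_ca_cert=@k8s-ca.crt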
This has been resolved.
I had to add the ALB DNS name to the API server certificate and then reboot all nodes.
Authentication is now working fine.
I have installed a Kubernetes cluster using kops.
From the node where kops installed kubectl, everything works perfectly (let's say node A).
I'm trying to connect to the Kubernetes cluster from another server with kubectl installed on it (node B). I have copied ~/.kube from node A to B.
But when I try to execute a basic command like:
kubectl get pods
I'm getting:
Unable to connect to the server: x509: certificate signed by unknown authority
My config file is:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSU.........
    server: https://api.kub.domain.com
  name: kub.domain.com
contexts:
- context:
    cluster: kub.domain.com
    user: kub.domain.com
  name: kub.domain.com
current-context: kub.domain.com
kind: Config
preferences: {}
users:
- name: kub.domain.com
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F..........
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk..........
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
- name: kub.domain.com-basic-auth
  user:
    password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
    username: admin
Appreciate any help
Let's try to troubleshoot these two:
Unable to connect to the server:
Check whether you have any firewall rules in the way. Is your node running in a virtual machine?
x509: certificate signed by unknown authority
Can you compare the certificates on both servers to confirm you are getting the same certificates?
curl -v -k $(grep 'server:' ~/.kube/config|sed 's/server://')
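A hedged sketch of that comparison (api.kub.domain.com and port 443 are taken from the config above; adjust if your API server listens elsewhere):

# run on both node A and node B; the hashes of the embedded CA should match
grep 'certificate-authority-data' ~/.kube/config | md5sum

# check which CA actually signed the certificate the API server presents
openssl s_client -connect api.kub.domain.com:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates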