Configure Kubectl to connect to a local network Kubernetes cluster - kubernetes

I'm trying to connect to a kubernetes cluster running on my Windows PC from my Mac. This is so I can continue to develop from my Mac but run everything on a machine with more resources. I know that to do this I need to change the kubectl context on my Mac to point towards my Windows PC but don't know how to manually do this.
When I've connected to a cluster before on AKS, I would use az aks get-credentials, and this would correctly add an entry to .kube/config and change the context to it. I'm basically trying to do the same thing, but on a local network.
I've tried to add an entry into the kubeconfig but get "The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?". I've also checked the antivirus on the Windows computer and no requests are getting blocked.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {CERT}
    server: https://192.168.1.XXX:6443
  name: windows-docker-desktop
current-context: windows-docker-desktop
kind: Config
preferences: {}
users:
- name: windows-docker-desktop
  user:
    client-certificate-data: {CERT}
    client-key-data: {KEY}
I've also tried kubectl --insecure-skip-tls-verify --context=windows-docker-desktop get pods, which results in the same error: "The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?".
Many thanks.

From your Mac, check whether the port is open, e.g. nc -zv 192.168.<your-windows-ip> 6443. If it doesn't report the port as open, you have a network problem.
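It can also help to check on the Windows PC itself whether anything is listening on 6443 at all (a quick sketch, run in PowerShell or cmd):
netstat -an | findstr 6443
If nothing shows up there, the problem is on the Windows side (the API server isn't exposed on that interface) rather than between the two machines.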
Next, try this directly in the config file:
clusters:
- cluster:
    server: https://192.168.1.XXX:6443
    insecure-skip-tls-verify: true
  name: windows-docker-desktop
You don't need to run set-context, since you have only one context.
To be sure it's not your firewall, disable it for a very short period, just to test the connection.
Last thing: it seems you are using Kubernetes in Docker Desktop. If not, and you have a local cluster with more than one node, you need to install a network fabric in your cluster like Flannel or Calico.
https://projectcalico.docs.tigera.io/about/about-calico
https://github.com/flannel-io/flannel
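Coming back to adding the entry manually: a sketch of roughly what az aks get-credentials does, using kubectl config (the names are illustrative, and it assumes you've copied ca.crt, client.crt and client.key out of the Windows machine's kubeconfig to the Mac first):
kubectl config set-cluster windows-docker-desktop \
  --server=https://192.168.1.XXX:6443 \
  --certificate-authority=ca.crt --embed-certs=true
kubectl config set-credentials windows-docker-desktop \
  --client-certificate=client.crt --client-key=client.key --embed-certs=true
kubectl config set-context windows-docker-desktop \
  --cluster=windows-docker-desktop --user=windows-docker-desktop
kubectl config use-context windows-docker-desktop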

Related

Docker Desktop on Windows11: kubectl get pods gives error on my laptop: The connection to the server 127.0.0.1:49994 was refused

I want to run Kubernetes locally so I can try it out on a small Java program I have written.
I installed WSL2 on my Windows 11 laptop, Docker Desktop, and enabled Kubernetes in Docker Settings.
There are a number of SO questions with the same error but I do not see any of them regarding Windows 11 and Docker Desktop.
I open a terminal and type wsl to open a Linux terminal. Then I issue the command:
$ kubectl get pods
but I see:
The connection to the server 127.0.0.1:49994 was refused - did you specify the right host or port?
Using Docker Desktop and Kubernetes on Ubuntu Linux, I got the same error. In my case Docker Desktop was also unable to start normally because I already had a Docker installation on my machine, which resulted in the docker context being set to the default Docker environment instead of the required Docker Desktop.
Confirm the following first:
Make sure kubectl is correctly installed and the ~/.kube/config file exists and is correctly configured on your machine, because it holds the cluster info and the port to connect to, both of which are used by kubectl.
Check the context with
kubectl config view
If it is not set to current-context: docker-desktop, as in the following example:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
then set the context to docker-desktop on your machine:
kubectl config use-context docker-desktop
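A quick sketch to confirm the switch took effect and the cluster answers:
kubectl config current-context
kubectl get nodes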
If that doesn't solve your issue, you may have to check for Windows 11-specific Docker Desktop Kubernetes configuration/features.
Check also:
Docker Desktop Windows FAQs

Kubectl commands cannot be executed from another VM

I'm having an issue when executing the "kubectl" commands. My cluster consists of one Master and one Worker node. The kubectl commands can be executed from the Master server without any issue. But I also have another VM which I use as a Jump server to log in to the master and worker nodes, and I need to execute the kubectl commands from that Jump server. I created the .kube directory and copied the kubeconfig file from the Master node to the Jump server, and I also set the context correctly. But the kubectl commands hang when executed from the Jump server and give a timeout error.
Below are the information.
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.240.0.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@ansible:~$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
ubuntu@ansible:~$ kubectl get pods
Unable to connect to the server: dial tcp 10.240.0.30:6443: i/o timeout
ubuntu@ansible:~$ kubectl config current-context
kubernetes-admin@kubernetes
Everything seems OK to me, and I'm wondering why the kubectl commands hang when executing from the Jump server.
I troubleshooted the issue by verifying whether the Jump VM could telnet to the Kubernetes Master node by executing the below:
telnet <ip-address-of-the-kubernetes-master-node> 6443
Since the error was a "Connection Timed Out", I had to add a firewall rule to the Kubernetes Master node, as below. Note: in my case I'm using GCP.
gcloud compute firewall-rules create allow-kubernetes-apiserver \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes \
  --source-ranges 0.0.0.0/0
Then I was able to telnet to the Master node without any issue. If you still can't connect to the Master node, change the internal IP in the kubeconfig file under the .kube directory to the public IP address of the Master node.
Then switch the context using the command below.
kubectl config use-context <context-name>
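If you do switch to the public IP, one way to change just the server field from the command line instead of editing the file by hand (a sketch; the cluster name kubernetes comes from the config above, and <master-public-ip> is a placeholder) is:
kubectl config set-cluster kubernetes --server=https://<master-public-ip>:6443
Note that the API server certificate may not include the public IP, in which case TLS verification may complain unless the certificate is regenerated with that IP or verification is skipped.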

How to access locally installed postgresql in microk8s cluster

I have installed Postgresql and microk8s on Ubuntu 18.
One of my microservices, which runs inside the microk8s single-node cluster, needs to access the PostgreSQL installed on the same VM.
Some articles suggest that I should create a service.yml and endpoint.yml like this:
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  type: ClusterIP
  ports:
  - port: 5432
    targetPort: 5432
---
kind: Endpoints
apiVersion: v1
metadata:
  name: postgresql
subsets:
- addresses:
  - ip: ?????
  ports:
  - port: 5432
Now, I don't understand what I should put in the subsets.addresses.ip field.
First you need to configure your PostgreSQL to listen not only on your VM's localhost. Let's assume you have a network interface with IP address 10.1.2.3 configured on the node on which your PostgreSQL instance is installed.
Add the following entry in your /etc/postgresql/10/main/postgresql.conf:
listen_addresses = 'localhost,10.1.2.3'
and restart your postgres service:
sudo systemctl restart postgresql
You can check if it listens on the desired address by running:
sudo ss -ntlp | grep postgres
From your Pods deployed within your Microk8s cluster you should be able to reach the IP addresses of your node, e.g. you should be able to ping the mentioned 10.1.2.3 from your Pods.
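For example (a sketch; pod-name is a placeholder and the container image must ship ping):
microk8s.kubectl exec -ti pod-name -- ping -c 2 10.1.2.3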
As it doesn't require any load balancing, you can reach your PostgreSQL directly from your Pods without needing to configure an additional Service that exposes it to your cluster.
If you don't want to refer to your PostgreSQL instance in your application by its IP address, you can edit your Deployment (which manages the set of Pods that connect to your postgres db) to modify the default content of the /etc/hosts file used by your Pods.
Edit your app Deployment by running:
microk8s.kubectl edit deployment your-app
and add the following section under Pod template spec:
hostAliases: # it should be on the same indentation level as "containers:"
- hostnames:
  - postgres
  - postgresql
  ip: 10.1.2.3
After saving it, all your Pods managed by this Deployment will be recreated according to the new specification. When you exec into your Pod by running:
microk8s.kubectl exec -ti pod-name -- /bin/bash
you should see additional section in your /etc/hosts file:
# Entries added by HostAliases.
10.1.2.3 postgres postgresql
From now on you can refer to your Postgres instance in your app by the names postgres:5432 or postgresql:5432, and they will be resolved to your VM's IP address.
I hope it helps.
UPDATE:
I almost forgot that some time ago I posted an answer on a very similar topic. You can find it here. It describes the usage of a Service without a selector, which is basically what you mentioned in your question. And yes, it can also be used for configuring access to your PostgreSQL instance running on the same host. As this kind of Service doesn't have selectors by definition, no endpoint is automatically created by kubernetes and you need to create one yourself. Once you have the IP address of your Postgres instance (in our example it is 10.1.2.3) you can use it in your endpoint definition.
Once you configure everything on the side of kubernetes you may still encounter an issue with Postgres. In your Pod that is trying to connect to the Postgres instance you may see the following error message:
org.postgresql.util.PSQLException: FATAL: no pg_hba.conf entry for host 10.1.7.151
It basically means that your pg_hba.conf file lacks the required entry that would allow your Pod to access your PostgreSQL database. Authentication is host-based, so in other words only hosts with certain IPs, or with IPs within a certain range, are allowed to authenticate.
Client authentication is controlled by a configuration file, which
traditionally is named pg_hba.conf and is stored in the database
cluster's data directory. (HBA stands for host-based authentication.)
So now you probably wonder which network you should allow in your pg_hba.conf. To handle cluster networking Microk8s uses flannel. Take a look at the content of your /var/snap/microk8s/common/run/flannel/subnet.env file. Mine looks as follows:
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.53.1/24
FLANNEL_MTU=1410
FLANNEL_IPMASQ=false
Adding only the flannel subnet to your pg_hba.conf should be enough to ensure that all your Pods can connect to PostgreSQL.
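For example, a minimal sketch of such an entry (it assumes md5 password authentication and the FLANNEL_NETWORK value shown above; narrow the database/user columns as needed):
# appended to pg_hba.conf, e.g. /etc/postgresql/10/main/pg_hba.conf
host    all    all    10.1.0.0/16    md5
Afterwards reload PostgreSQL, e.g. with sudo systemctl reload postgresql.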

Connect gitlab to kubernetes cluster hosted in rancher

I am trying to connect to a rancher2 kubernetes cluster from gitlab. My kube config looks like
apiVersion: v1
kind: Config
clusters:
- name: "k8s"
  cluster:
    server: "https://..."
- name: "k8s-desktop"
  cluster:
    server: "https://192.168.0.2:6443"
    certificate-authority-data: ...
I need to point GitLab at the cluster.server value, https://192.168.0.2:6443, which is an internal IP. How can I override this value in the kubeconfig with my external IP so GitLab is able to connect?
When you log into Rancher, you can get the kubeconfig file. This will use the Rancher URL on port 443. Your kubeconfig seems to point directly at your k8s node, like the kubeconfig you obtain when using RKE.
If by external IP you mean connecting from outside, then you need a device capable of port forwarding. Please clarify what you mean by internal/external IP.
From my side, I have no problem giving GitLab the Rancher URL in order to connect to k8s. Rancher will proxy the connection to the k8s cluster.
I don't see any reason to change your server IP to the external one.
What you should do is create port forwarding from the internal https://192.168.0.2:6443 to your external IP, and then use the external URL with the forwarded port as the Kubernetes API URL in GitLab.
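For illustration only (the port number is made up): if your router forwards external port 16443 to 192.168.0.2:6443, the Kubernetes API URL you give GitLab would look like https://<your-external-ip>:16443.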

How to expose the API server of a minikube cluster to the public network (LAN)?

Is there a way to expose the API server of a Kubernetes cluster created with minikube on a public network interface to the LAN?
minikube start --help talks about this option (and two similar ones):
--apiserver-ips ipSlice \
A set of apiserver IP Addresses which are used in the generated \
certificate for localkube/kubernetes. This can be used if you \
want to make the apiserver available from outside the machine (default [])
So it seems to be possible. But I can't figure out how or find any further information on that.
I naively tried:
minikube start --apiserver-ips <ip-address-of-my-lan-interface>
But that just yields an utterly dysfunctional minikube cluster that I can't even access from localhost.
Following the advice in one answer below, I added port forwarding to the minikube VM like this:
vboxmanage controlvm "minikube" natpf1 "minikube-api-service,tcp,,8443,,8443"
And then I can actually access the API server from a different host on the network with:
curl --insecure https://<ip-address-of-host-running-minikube>:8443
But the response is:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
There are two problems with this:
I have to use --insecure for the curl call, otherwise I get an SSL validation error.
I get a response, but the response is just telling me that I'm not allowed to use the API...
The big source of your headache is that minikube (usually) runs in a VM with its own IP address. For security, it generates some self-signed certificates and configures the command-line tool, kubectl, to use them. The certs are self-signed with the VM IP as the host name.
You can see this if you use kubectl config view. Here's mine:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/sam/.minikube/ca.crt
    server: https://192.168.39.226:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /home/sam/.minikube/client.crt
    client-key: /home/sam/.minikube/client.key
Let's unpack that.
server: https://192.168.39.226:8443 - this tells kubectl where the server is. In a vanilla minikube setup, it's https://<ip-of-the-vm>:8443. Note that it's HTTPS.
certificate-authority: /home/sam/.minikube/ca.crt - this line tells the tool which certificate authority file to use to verify the TLS cert. Because it's a self-signed cert, even in vanilla setups you'd have to either inform curl about the certificate authority file or use --insecure.
- name: minikube
  user:
    client-certificate: /home/sam/.minikube/client.crt
    client-key: /home/sam/.minikube/client.key
This chunk specifies which user to authenticate as when issuing commands - that's why you get the unauthorized message even when using --insecure.
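For example, the curl call from the question can be made to pass both checks by pointing it at those same files (a sketch, using the paths and VM IP from my config above; adjust to your own):
curl --cacert /home/sam/.minikube/ca.crt \
  --cert /home/sam/.minikube/client.crt \
  --key /home/sam/.minikube/client.key \
  https://192.168.39.226:8443/version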
So to use the minikube cluster from a different IP you'll need to:
1) Use the --apiserver-ips <target-ip-here> (so the cert minikube generates is for the correct IP you'll be accessing it from)
2) Forward the 8443 port from the minikube vm to make it available at <target-ip-here>:8443
3) Publish or otherwise make available the cert files referenced from kubectl config view
4) Set up your kubectl config on the other machine to mimic your local kubectl config, using the new IP and referencing the published cert files (see the sketch below).
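A rough command-level sketch of steps 1) and 2) on the minikube host (VirtualBox driver assumed; <target-ip-here> is a placeholder):
minikube start --apiserver-ips <target-ip-here>
vboxmanage controlvm "minikube" natpf1 "minikube-api,tcp,,8443,,8443"
Steps 3) and 4) then come down to copying ~/.minikube/ca.crt, client.crt and client.key to the other machine and recreating the cluster, user and context entries there with kubectl config set-cluster / set-credentials / set-context, pointing the server at https://<target-ip-here>:8443.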
You need to forward some ports on your LAN interface to the VM where Kubernetes is running. That will work for any service inside Minikube, not only for Kubernetes itself.
In short, if you are using VirtualBox as a VM driver, you should:
Find the port on the VM where your service binds. Commands kubectl describe <servicename> and minikube service <servicename> --url should help you.
Forward ports to the VM using vboxmanage tool:
vboxmanage controlvm "minikube" natpf1 "http,tcp,,12345,,80"
Where minikube is the name of the VM, natpf1 is the NAT interface of the VM, 12345 is the port on the host (your local machine), and 80 is the port inside the VM.
I was looking for a way to access my minikube cluster publicly, from Vault. As Vault needs the cluster to be reachable, I'm posting this as an answer since it might help someone else.
Although I did not check this approach locally, it is working fine on EC2.
minikube start --apiserver-ips=13.55.145.00 --vm-driver=none
Here --apiserver-ips=13.55.145.00 should be the public IP of your EC2 instance.
--vm-driver or --driver string
Used to specify the driver to run Kubernetes in. The list of available drivers depends on the operating system.
This should be none, as EC2 is already a virtual machine.
You might not be able to see the IP in the kubectl cluster-info response, but minikube creates a certificate for the IP that is passed.
For further detail, check:
https://github.com/Adiii717/vault-sidecar-injector-app/blob/main/README.md#vault-enterprise-with-minikube-ec2