I have a local server running that I'm trying to send requests to from a pod in a single-node local minikube cluster, but I'm getting connection refused. I can curl the service locally and it works fine. What can I do to allow outbound connections from the pod to hit my local server? If I do minikube ssh I can curl google.com or example.com fine.
I found from posts on GitHub that
host.minikube.internal is a new hostname that can be used to access the host machine. Curling it from minikube ssh proves access.
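From a pod it should work the same way, assuming the cluster DNS resolves that name for pods (the pod name and port below are placeholders for your own setup):
kubectl exec -it my-pod -- curl http://host.minikube.internal:8000
If the name doesn't resolve inside the pod, use the IP address that host.minikube.internal resolves to from minikube ssh instead.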
I have two minikube clusters (two separate profiles) running locally; call them minikube cluster A and minikube cluster B. Each of these clusters also has an ingress and a DNS name associated with it locally. The DNS names are hello.dnsa and hello.dnsb. I am able to ping and nslookup both of them just like this: https://minikube.sigs.k8s.io/docs/handbook/addons/ingress-dns/#testing
I want pod A in cluster A to be able to communicate with pod B in cluster B. How can I do that? I logged into pod A of cluster A and did telnet hello.dnsb 80 and it doesn't get connected, because I suspect there is no route. Similarly, I logged into pod B of cluster B and did telnet hello.dnsb 80 and it doesn't get connected. However, if I do telnet hello.dnsa 80 or telnet hello.dnsb 80 from my host machine, telnet works!
Is there any simple way to solve this problem for now? I am OK with any solution, even adding routes manually using ip route add if needed.
Skupper is a tool available for performing these actions. It is a service interconnect that enables secure communication between clusters; for more information on Skupper, go through this documentation.
There are multiple examples in which minikube is integrated with Skupper; go through this configuration documentation for more details.
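As a rough sketch (the context names, deployment name and port below are placeholders, not your actual values), the usual Skupper flow for two minikube profiles looks something like this:
# against cluster A (profile A's kubectl context)
kubectl config use-context clusterA
skupper init
skupper token create secret.token
# against cluster B
kubectl config use-context clusterB
skupper init
skupper link create secret.token
skupper expose deployment/hello --port 80
Once the link is up, Skupper creates a matching hello service in cluster A, so pod A can reach pod B's workload through that service name.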
I tried to install kubernetes with Docker desktop. However, as soon as I type in
kubectl get nodes
I get a "Remote kubernetes server unreachable" error.
I0217 23:42:56.224000 26220 versioner.go:56] Remote kubernetes server unreachable
Unable to connect to the server: dial tcp 172.28.112.98:6443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
Any ideas on how to fix this?
Have you got more than one context in your kubeconfig?
You can check this with kubectl config get-contexts.
If necessary change your context to Docker Desktop Kubernetes using kubectl config use-context docker-desktop.
Is it possible that you tried minikube at some point and it has left a cluster/context in your .kube\config?
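If so, the stale entry will show up next to the Docker Desktop one, for example:
kubectl config get-contexts            # lists every context kubectl knows about
kubectl config current-context         # the one it is pointed at right now
kubectl config use-context docker-desktop
kubectl get nodes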
Configure Access to Multiple Clusters
Go to your HOME/.kube directory and check the config file. There is a possibility that the server mentioned there is old or not reachable.
You can copy the new config file (from the remote server or from the directory of tools like k3s) and add it to, or replace, your HOME/.kube/config file.
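For example, assuming the remote machine runs k3s, its admin kubeconfig sits at a well-known path; copy it over (you may need root on the remote machine to read it) and then edit the server: field inside, which points at 127.0.0.1 by default, to the machine's reachable address:
scp user@remote-server:/etc/rancher/k3s/k3s.yaml ~/.kube/config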
The same error may also happen when you switch from a local k8s cluster to a remote cluster that requires a VPN to connect, and you are not connected to that VPN.
I have a server running Ubuntu where I need to expose my app using Kubernetes tools. I created a cluster using minikube with the VirtualBox driver, and with the command kubectl expose deployment I was able to expose my app... but only on my local network. That means when I run minikube ip I get a local IP. My question is: how can I access my minikube machine from outside?
I think the answer will be "port-forwarding". But how can I do that?
You can use SSH port forwarding to access your services from the host machine in the following way:
ssh -R 30000:127.0.0.1:8001 $USER@192.168.0.20
In which 8001 is the port on which your service is exposed and 192.168.0.20 is the minikube IP.
Now you'll be able to access your application from your laptop by pointing the browser to http://192.168.0.20:30000
If you mean accessing your machine from the internet, then the answer is yes, "port-forwarding", using your external IP address [https://www.whatismyip.com/]. The configuration goes into your router settings; check your router manual.
I have created a new GCP Kubernetes cluster. The cluster is private with NAT - it does not have a connection to the internet. I also deployed a bastion machine which allows me to connect into my private network (VPC) from the internet. This is the tutorial I based it on. SSH into the bastion is currently working.
The kubernetes master is not exposed outside. The result:
$ kubectl get pods
The connection to the server 172.16.0.2 was refused - did you specify the right host or port?
So I installed kubectl on the bastion and ran:
$ kubectl proxy --port 1111
Starting to serve on 127.0.0.1:3128
Now I want to connect my local kubectl to the remote proxy server. I set up a secure tunnel to the bastion server and mapped the remote port to a local port. I also tried it with curl and it's working.
Now I'm looking for something like
$ kubectl --use-proxy=1111 get pods
(Make my local kubectl pass through my remote proxy)
How to do it?
kubectl proxy acts as an apiserver, exactly like the target apiserver - but the queries through it are already authenticated. From your description ('works with curl') it sounds like you've set it up correctly; you just need to point the client kubectl at it:
kubectl --server=http://localhost:1111 get pods
(Where port 1111 on your local machine is where kubectl proxy is available; in your case through a tunnel.)
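Putting the pieces together, the whole chain could look like this (the port number and the bastion host alias are illustrative):
# on the bastion: authenticated proxy in front of the private apiserver
kubectl proxy --port 1111
# on your workstation: tunnel local port 1111 to the proxy on the bastion
ssh -L 1111:127.0.0.1:1111 bastion
# then, still on the workstation, talk to the cluster through the tunnel
kubectl --server=http://localhost:1111 get pods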
If you need exec or attach through kubectl proxy you'll need to run it with either --disable-filter=true or --reject-paths='^$'. Read the fine print and consequences of those options.
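For example, with the same hypothetical port as above:
kubectl proxy --port 1111 --disable-filter=true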
Safer way
All in all, this is not how I access clusters through a bastion. The problem with the above approach is that if someone gains access to the bastion, they immediately have valid Kubernetes credentials (as kubectl proxy needs those to function). It is also not the safest solution if the bastion is shared between multiple operators. One of the main points of a bastion is that it never has credentials on it. What I fancy doing is accessing the bastion from my workstation with:
ssh -D 1080 bastion
That makes ssh act as a SOCKS proxy. You need GatewayPorts yes in your sshd_config for this to work. Thereafter, from the workstation I can use
HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pod
I have set up a Kubernetes cluster on my Ubuntu machine; it was working before, but after restarting the machine it's not working properly anymore.
I am getting the following error.
root@master:~# kubectl get nodes
The connection to the server 192...*:6443 was refused - did you specify the right host or port?
root@master:~#
Can you explain how you installed Kubernetes? For instance, are you using minikube?
Perhaps you just need to start minikube:
minikube start
This takes a few minutes to start up, so be patient.
An alternative cause is that your IP may have changed. Does ip addr match the IP it's trying to connect to? If it doesn't, you may need to edit your ~/.kube/config file.
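A quick way to compare the two (assuming the default kubeconfig location):
ip addr show                      # the addresses the machine has now
grep server ~/.kube/config        # the apiserver address kubectl is using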
Is anything listening on port 6443?
netstat -pant | grep 6443
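If nothing is listening there, the control plane probably didn't come back up after the reboot. Assuming a kubeadm-style install, checking and restarting the kubelet (which in turn starts the apiserver) is a reasonable next step:
sudo systemctl status kubelet
sudo systemctl restart kubelet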