Unable to access application through minikube tunnel - kubernetes

I'm currently using minikube and I'm trying to access my application by utilizing the minikube tunnel since the service type is LoadBalancer.
I'm able to obtain an external IP when I execute minikube tunnel; however, when I try to check it in the browser it doesn't work. I've also tried Postman and curl, and they don't work either.
To add to this, if I shell into the pod I can use curl and it does work. Furthermore, I executed kubectl port-forward and I was able to access my application through localhost.
Does anyone have any idea why I'm not able to access my application even though everything seems to be running correctly?

Your service is probably bound to localhost. Minikube starts the cluster in a VM or in a Docker container (depending on the driver you are using) that is bound to an external IP, $(minikube ip).
When you run minikube tunnel you're tunneling from the minikube cluster's external IP to the internal IP of the load balancer. In Kubernetes, the LoadBalancer service's external IP goes from "pending" to an actual internal IP, and something like this should work:
curl -H 'Host: localhost' -v $(minikube ip)
However, it doesn't work in the browser, since in the command above you are sending the request to minikube's IP, not to localhost. What I do to make this work is an ssh tunnel like this one:
ssh -i $(minikube ssh-key) docker@$(minikube ip) -L 8008:localhost:80
This maps the LB listener on port 80 inside the minikube cluster to port 8008 on localhost. The external IP of the service remains pending, but it works since the kube controller can still find it. If you want to map to local port 80 you will need to add sudo.
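For example, the end-to-end flow might look roughly like this (the service name my-app and the local port 8008 are purely illustrative):
# terminal 1: open the tunnel from the host into the minikube VM
ssh -i $(minikube ssh-key) docker@$(minikube ip) -L 8008:localhost:80
# terminal 2: check the service, then hit it through the local end of the tunnel
kubectl get svc my-app
curl -v http://localhost:8008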

If the version of ssh on your system (the one on your path) is less than 8.0, 'minikube tunnel' will silently fail to instantiate the ssh tunnel for some port forwards (e.g. privileged ports).
Open a command prompt as administrator, and type 'where.exe ssh'. Navigate to that location in windows explorer, and right-click on 'ssh.exe'. Choose Properties->Details to see the version.
If this is less than version 8.0 you must upgrade that to at least version 8.0 to prevent this silent failure of ssh by 'minikube tunnel'.
After upgrading ssh, ensure that the newer version is the one that will be executed by using the 'where.exe' command again. If there are two on your system, reorder the paths in your PATH environment variable. Restart your shell (or better, reboot the system) so that all processes pick up the path changes.
Then try 'minikube tunnel' again. When it is working, you should see an ssh instance in the task manager for each tunnel that minikube creates.

In my case minikube service <serviceName> solved this issue.
For further details see the minikube docs.
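For example, assuming your service is called my-app (the name is illustrative), the following either opens the service in the browser or prints a URL you can curl:
minikube service my-app
minikube service my-app --url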

Related

Run a K3S server in a docker container, and connect a K3S agent in another docker container

I know k3d can do this magically via k3d cluster create myname --token MYTOKEN --agents 1, but I am trying to figure out how to do the most simple version of that 'manually'. I want to create a server something like:
docker run -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
And connect an agent something like:
docker run -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://localhost:6443 rancher/k3s:latest agent
Does anyone know what ports need to be forwarded here? How can I set this up? With nearly everything I try, the agent complains that port 6444 is already in use, even if I disable as much of the server as possible with any combination of --no-deploy servicelb --disable-agent --no-deploy traefik.
Feel free to disable literally everything other than the server and the agent, I'm trying to make this ultra ultra simple, but just butting my head against a wall at the moment. Thanks!
The containers must "see" each other. Docker isolates the networks by default, so "localhost" in your agent container is the agent container itself.
Possible solutions:
Run both containers without network isolation using --net=host; or publish the server's API port to the host with -p and use the host IP in the agent container; or use docker-compose.
A working example for docker-compose is described here: https://www.trion.de/news/2019/08/28/kubernetes-in-docker-mit-k3s.html
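If you prefer plain docker run over docker-compose, here is a minimal sketch using a user-defined Docker network (essentially what docker-compose sets up for you; the container and network names are illustrative, and --privileged mirrors how k3d runs these containers):
docker network create k3s-net
docker run -d --name k3s-server --privileged --network k3s-net -e K3S_TOKEN=MYTOKEN rancher/k3s:latest server
docker run -d --name k3s-agent --privileged --network k3s-net -e K3S_TOKEN=MYTOKEN -e K3S_URL=https://k3s-server:6443 rancher/k3s:latest agent
On a user-defined network the containers can resolve each other by name, so the agent reaches the server at k3s-server:6443 instead of localhost.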

How to set minikube proxy when the driver is hyperkit or virtualbox?

I am trying to use Ingress in minikube by minikube addons enable ingress. However, currently Ingress cannot be used with minikube when the driver is docker on macOS based on this issue ticket.
So I turned to using hyperkit or virtualbox as the driver. One image that needs to be pulled when enabling Ingress is k8s.gcr.io/ingress-nginx/controller:v0.44.0. However, k8s.gcr.io is blocked in my current location.
So I tried to use a VPN in global mode on my computer. However, I hit this issue where hyperkit is unable to access k8s.gcr.io when the VPN is in use.
Then I found this document
https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/
My VPN is listening at 127.0.0.1:1087, I set
export HTTP_PROXY=http://127.0.0.1:1087
export HTTPS_PROXY=https://127.0.0.1:1087
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
Then I tried all these methods to start minikube:
minikube start --driver=hyperkit
minikube start --driver=virtualbox
minikube start --driver=hyperkit --docker-env HTTP_PROXY=http://127.0.0.1:1087 --docker-env HTTPS_PROXY=https://127.0.0.1:1087 --docker-env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
But I saw these messages:
😄 minikube v1.21.0 on Darwin 11.2.3
✨ Using the hyperkit driver based on user configuration
❗ Local proxy ignored: not passing HTTP_PROXY=http://127.0.0.1:1087 to docker env.
❗ Local proxy ignored: not passing HTTPS_PROXY=https://127.0.0.1:1087 to docker env.
👍 Starting control plane node minikube in cluster minikube
🔥 Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
❗ Local proxy ignored: not passing HTTP_PROXY=http://127.0.0.1:1087 to docker env.
❗ Local proxy ignored: not passing HTTPS_PROXY=https://127.0.0.1:1087 to docker env.
and
😄 minikube v1.21.0 on Darwin 11.2.3
✨ Using the virtualbox driver based on existing profile
❗ Local proxy ignored: not passing HTTP_PROXY=http://127.0.0.1:1087 to docker env.
❗ Local proxy ignored: not passing HTTPS_PROXY=https://127.0.0.1:1087 to docker env.
Seems this "user configuration" overwrite my proxy config. But where is this "user configuration"?
What is the correct way to set proxy for minikube when the drive hyperkit or virtualbox? Thanks!
My guess is that 127.0.0.1 conflicts with the VM's internal 127.0.0.1 address, and that's why it's ignored. You might need to configure your proxy to listen on your host's network IP instead of 127.0.0.1, or you might not need to configure a proxy at all. Also, the VirtualBox driver gives me problems with VPNs. I have the best luck with the VMware driver, and can also get the HyperKit driver to work if I update the VM's DNS to my host's DNS:
minikube start --driver hyperkit
minikube ssh sudo resolvectl dns eth0 192.168.0.53
minikube ssh sudo resolvectl domain eth0 example.com
I also get the unable to access k8s.gcr.io error when creating the VM, but it doesn't seem to affect things.
Downloading this image using docker, exporting it to a file, transferring it to the minikube VM and importing it into the local docker registry there, as described in this thread, solved the problem.
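A rough sketch of that sideloading approach (one of several ways to do it; minikube image load needs a reasonably recent minikube, otherwise copy the tarball into the VM and docker load it there):
docker pull k8s.gcr.io/ingress-nginx/controller:v0.44.0
docker save k8s.gcr.io/ingress-nginx/controller:v0.44.0 -o controller.tar
minikube image load controller.tar
# alternatively, point your docker CLI at the VM's daemon and load it directly:
# eval $(minikube docker-env) && docker load -i controller.tar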
Your proxy is for circumventing the Great Firewall of China, correct? Then I know why it is not working. It is not related to hyperkit or virtualbox at all.
I checked the source code of minikube. "Local proxy ignored" actually means that your proxy URL points to localhost (127.0.*), so minikube assumes you set the proxy incorrectly and simply ignores the setting.
The resolution is to edit your hosts file (on Windows it is C:\Windows\System32\drivers\etc\hosts, on macOS/Linux it is /etc/hosts) to give 127.0.0.1 a hostname. You can add the following line at the end of the hosts file.
127.0.0.1 localproxy
Then change the environment variables http_proxy and https_proxy to point at that hostname, e.g. http://localproxy:1087 for your proxy's port.
Reopen the terminal so it picks up the updated environment variables and restart minikube. You should find that the "Local proxy ignored" message is gone and you can finally download the image from k8s.gcr.io.
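On macOS, where the original question is running, a literal translation of those steps would look roughly like this, reusing the questioner's port 1087 (note the earlier remark that the VM may need your host's LAN IP rather than 127.0.0.1 to actually reach the proxy):
sudo sh -c 'echo "127.0.0.1 localproxy" >> /etc/hosts'
export HTTP_PROXY=http://localproxy:1087
export HTTPS_PROXY=http://localproxy:1087
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24
minikube start --driver=hyperkit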

kubectl proxy within ubuntu in WSL windows 10

I'm running Windows 10 with WSL1 and Ubuntu as the distro.
My windows version is Version 1903 (Build 18362.418)
I'm trying to connect to Kubernetes using kubectl proxy within Ubuntu on WSL. I get a connection refused error when trying to reach the dashboard with my browser.
I have checked with netstat -a in Windows to see the active connections.
If I run kubectl from the Windows terminal I have no problem connecting to Kubernetes, so the problem only happens when I try to connect from Ubuntu on WSL1.
I have also tried to run the following command
kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'
... but the connection is refused, although I can see that Windows is listening on the port. Changing to another port didn't fix the problem. Disabling the firewall didn't fix it either.
Any idea?
The first thing to do would be to check whether you can actually talk to your cluster (kubectl get svc -n kube-system, kubectl cluster-info).
If not, check whether the $HOME/.kube folder was created. If it wasn't, and your cluster is on GKE, run:
gcloud container clusters get-credentials default --region=<your_region>
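If you are not on GKE and the cluster was configured from the Windows side, it is also worth checking that the kubectl inside WSL sees a kubeconfig at all; a rough sketch (the Windows username and paths are illustrative):
mkdir -p ~/.kube
cp /mnt/c/Users/<your-windows-user>/.kube/config ~/.kube/config
kubectl cluster-info
kubectl proxy --port=8001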

Kubernetes ssh into nodes not working in local

How do I ssh to a node inside the cluster locally? I am using the Docker edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options to access Kubernetes node shell.
In case you are using cloud provider, you are able to connect to nodes directly from instances management interface.
For example, in GCP: Select Menu -> Compute Engine -> VM instances, then press SSH button on the left side of the desired node instance.
If you are using a local VM (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs: vagrant ssh
If you are using minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
How to ssh to the node inside the cluster in local
Kubernetes is aware of nodes only at the level of secure communication with the kubelets on those nodes (getting the hostname and IP from the node), and as such does not provide cluster-level ssh to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, and they all boil down to locating your ssh key, opening the appropriate ports on the firewall/security groups and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines you can set up your own ssh keypairs and do the trick, as in the sketch below.
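For a local VM, that boils down to something like the following (the key path, user name and node IP are illustrative):
ssh-keygen -t ed25519 -f ~/.ssh/k8s-node -N ""
# append ~/.ssh/k8s-node.pub to ~/.ssh/authorized_keys for the VM's user, then:
ssh -i ~/.ssh/k8s-node user@<node-ip>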
You can effectively shell into a pod using kubectl exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed in the container.
Hope that helps.
You first have to extend kubectl with plugins, such as the ones from https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is executable and somewhere on your PATH.
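For example, after the download (the target directory is just a common choice; any directory on your PATH works):
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/
kubectl plugin list
# kubectl plugin list should now show kubectl-ssh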

How to make a TCP outgoing connection with Docker container?

My Go application makes TLS connections via tls.Dial() to exchange data.
It works fine when run from the host.
But the outgoing connection doesn't seem to work when the app is run from a Docker container. The app hangs indefinitely.
Note 1: Same behavior with using docker run -p $(docker-machine ip):2500:2500 ...
Note 2: VM doesn't have extra port forwarding settings other than the default settings that came with docker-machine's default VM.
The Docker image is built with this Dockerfile:
FROM golang:latest
RUN mkdir -p "$GOPATH/src/path/to/app"
# Install dependencies
RUN go get github.com/path/to/dep
VOLUME "$GOPATH/src/path/to/app"
EXPOSE 2500
WORKDIR "$GOPATH/src/path/to/app"
CMD ["go", "run", "main.go"]
Host is OS X running docker-machine.
Question
How can I make the TCP outgoing connection to work?
You are either using boot2docker or docker-machine (since you are running docker on OSX). If you are using boot2docker, you have to forward the ports on VirtualBox as well as docker, have a look at this blog post:
https://fogstack.wordpress.com/2014/02/09/docker-on-osx-port-forwarding/
If you are using docker-machine, you have to connect to the docker-machine assigned ip, not localhost, have a look at this post:
https://github.com/docker/machine/issues/710
I see now that you are using docker-machine specifically, so the post about docker-machine should answer your question.
Edit: I misunderstood the question. You are trying to make an outgoing connection on a forwarded port. That is not how it works. By default Docker containers can make outgoing connections on any port; port forwarding is for incoming connections only. Please try again without specifying any ports to forward. My suspicion is that you are trying to make the outgoing connection on the incoming (forwarded) port.
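You can verify that outgoing connections need no -p flags at all by making one from a throwaway container, e.g. (example.com is just a placeholder target; curl is present in the official golang image):
docker run --rm golang:latest curl -sv https://example.com -o /dev/null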
I've just had exactly the same problem: I was unable to connect out at all.
I restarted the container, and suddenly outgoing connections worked fine. It's possible that the container survived an update of Docker?
Currently using Docker version 18.09.3, build 774a1f4