Unable to reach the application using Ingress in Minikube (macOS) - kubernetes

I am trying to follow this tutorial to setup Minikube on my MacBook.
Minikube with ingress
I have also referred to Stack Overflow question and Stack Overflow question 2; neither of these works for me.
When I run minikube tunnel, it asks for a password and then gets stuck after I enter it.
sidharth@Sidharths-MacBook-Air helm % minikube tunnel
βœ… Tunnel successfully started
πŸ“Œ NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
❗ The service/ingress example-ingress requires privileged ports to be exposed: [80 443]
πŸ”‘ sudo permission will be asked for it.
❗ The service/ingress minimal-ingress requires privileged ports to be exposed: [80 443]
πŸƒ Starting tunnel for service example-ingress.
πŸ”‘ sudo permission will be asked for it.
πŸƒ Starting tunnel for service minimal-ingress.
Password:
I am getting the below response when I run kubectl get ingress:
NAME              CLASS   HOSTS              ADDRESS     PORTS   AGE
example-ingress   nginx   hello-world.info   localhost   80      34m

This is an issue specifically with the docker driver, and it's only an output issue. If you use a VM driver (like hyperkit for macOS), you'll get the expected output in the documentation.
This stems from the fact that we need to do two discrete things to tunnel for a container driver (since it needs to route to 127.0.0.1) and for a VM driver.
We can potentially look into fixing this so that the output of both versions is similar, but the tunnel itself is working fine.
Refer to this GitHub link for more information.
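For example, while minikube tunnel stays open in one terminal, you can check from a second terminal that the ingress actually answers on localhost. This is only a sketch: hello-world.info is the host from the tutorial, so substitute whatever host your ingress rule uses.
curl --resolve "hello-world.info:80:127.0.0.1" -i http://hello-world.info
If the tunnel is healthy, this returns the application's response even though the tunnel process itself looks stuck.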

On Macs with M1 chips, I can use minikube with podman (both can be installed with brew). minikube tunnel will ask for the password and then appear to hang, because it is essentially establishing a tunnel, similar to kubectl port-forward, between 127.0.0.1:443 and your ingress; that tunnel disappears once you Ctrl-C out of the minikube tunnel process.
Note that this is different from the experience/behavior you get when using minikube with VirtualBox on an Intel-based Mac, which returns the list of mapped ports and always has the ingress reachable on the VirtualBox VM.

Related

How to connect to a minikube cluster created in a Linux VM from a local Windows 10 computer?

I have created the following minikube cluster on a Linux machine. Now I want to connect to a node of the cluster from my local Windows 10 machine, which has kubectl installed. How do I connect to a worker node of the minikube cluster from my Windows machine? I am new to Kubernetes, so please let me know if any details need to be added to the question.
NAME           STATUS   ROLES                  AGE   VERSION
minikube       Ready    control-plane,master   52d   v1.22.3
minikube-m02   Ready    worker                 49m   v1.22.3
minikube-m03   Ready    worker                 43m   v1.22.3
Deploying an Nginx reverse proxy in front of minikube lets local machines interact with the virtual machine where minikube is installed.
You can't access minikube remotely because it is only accessible locally. For this reason, you need to deploy an Nginx reverse proxy next to minikube that can receive requests from remote clients and forward them to kube-apiserver. The Kubernetes API server is the point all your requests go through when you use the command-line tool kubectl; kubectl lets you run commands against Kubernetes clusters.
Refer to this document for the detailed procedure of installing an Nginx reverse proxy in front of minikube.
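As a minimal sketch of that setup (the addresses here are assumptions: 192.168.49.2:8443 is a typical minikube API-server endpoint, and a stream block must sit at the top level of nginx.conf, outside the http block):
stream {
    server {
        listen 8443;
        proxy_pass 192.168.49.2:8443;
    }
}
From the Windows machine you could then point kubectl at the Linux host, reusing the client certificate and key that minikube generates under ~/.minikube:
kubectl --server=https://LINUX_HOST_IP:8443 --insecure-skip-tls-verify --client-certificate=client.crt --client-key=client.key get nodes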

Access Minikube Services from public IP [duplicate]

I know minikube should be used for local development only, but I'd like to create a test environment for my applications.
To do that, I wish to expose the applications running inside my minikube cluster to external access (from any device on the public internet, like a 4G smartphone).
Note: I run minikube with --driver=docker.
kubectl get services
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
web8080   NodePort   10.99.39.162   <none>        8080:31613/TCP   3d1h
minikube ip
192.168.49.2
One way to do it is as follows:
firewall-cmd --add-port=8081/tcp
kubectl port-forward --address 0.0.0.0 services/web8080 8081:8080
Then I can access it using:
curl localhost:8081 (directly from the machine running the cluster inside a VM)
curl 192.168.x.xx:8081 (from my Mac on the same network; this is the private IP of the machine running the cluster inside a VM)
curl 84.xxx.xxx.xxx:8081 (from a phone connected over 4G; this is the public IP exposed by my router)
I don't want to use this solution because kubectl port-forward is fragile and needs to be rerun every time the forwarding stops.
How can I achieve this?
(EDITED) - USING LOADBALANCER
When using the LoadBalancer type and minikube tunnel, I can expose the service only inside the machine running the cluster.
kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
my-service   LoadBalancer   10.111.61.218   10.111.61.218   8080:31831/TCP   3d3h
curl 10.111.61.218:8080 (inside the machine running the cluster) works,
but curl 192.168.x.xx:8080 (from my Mac on the same LAN) does not.
Thanks
Minikube, as a development tool for a single-node Kubernetes cluster, provides an inherent isolation layer between Kubernetes and external devices (specifically, inbound traffic to your cluster from the LAN/WAN).
The different --driver options allow flexibility in where your Kubernetes cluster is spawned and how it behaves network-wise.
A side note (workaround)!
Since your minikube already resides in a VM and uses --driver=docker, you could try --driver=none instead (you will then be able to curl VM_IP:NodePort from the LAN). It spawns your Kubernetes cluster directly on the VM.
Consider checking its documentation, as there are certain limitations/disadvantages:
Minikube.sigs.k8s.io: Docs: Drivers: None
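A rough sketch of that workaround (the NodePort 31613 is taken from your kubectl get services output; --driver=none runs directly on Linux, needs root, and you will have to redeploy the service after recreating the cluster):
minikube delete
sudo minikube start --driver=none
curl $(minikube ip):31613
After that, the service is also reachable from the LAN as VM_IP:31613.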
As this setup is already based on a VM (with an unknown hypervisor) and the cluster is intended to be exposed outside of your LAN, I suggest going with a production-ready setup. This will inherently eliminate the connectivity issues you are facing: the Kubernetes cluster will be provisioned directly on a VM and not in a Docker container.
Explaining the --driver=docker option: it spawns a container on the host system with Kubernetes inside it. Inside that container, Docker is used once again to spawn the necessary Pods to run the Kubernetes cluster.
As for the tools to provision your Kubernetes cluster, you will need to choose the option that suits your needs the most. Some of them are the following:
Kubeadm
Kubespray
MicroK8S
After you have created your Kubernetes cluster on a VM, you can forward traffic from your router directly to the VM.
Additional resources that you might find useful:
Stackoverflow.com: Questions Expose Kubernetes cluster to the Internet (Virtualbox with minikube)
curl $(minikube ip):$NODE_PORT: now we can test that the app is exposed outside of the cluster using curl, the IP of the node, and the externally exposed port.
For you: curl 192.168.49.2:31613
Use nginx reverse-proxy
https://www.zepworks.com/posts/access-minikube-remotely-kvm/
Install nginx, then add this to nginx.conf:
stream {
    server {
        listen 8081;
        proxy_pass 192.168.49.2:8080;
    }
}
Then restart nginx.
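To apply the change (a sketch assuming nginx runs under systemd; adjust for your init system):
sudo nginx -t                  # validate the configuration first
sudo systemctl restart nginx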
One way I use to get around the fact that the kubectl port-forward process stops after a while is to create a detached session using tmux, following this guide. With that, I haven't had any problems with the exact same Minikube cluster configuration that you have.
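A minimal sketch of that approach (the session name pf is arbitrary):
tmux new -s pf
kubectl port-forward --address 0.0.0.0 services/web8080 8081:8080
Detach with Ctrl-b d; the forward keeps running in the background, and you can reattach later with tmux attach -t pf.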

minikube ip is not reachable

I have created one service called fleetman-webapp:
apiVersion: v1
kind: Service
metadata:
  name: fleetman-webapp
spec:
  selector:
    app: webapp
  ports:
  - name: http
    port: 80
    nodePort: 30080
  type: NodePort
Also, a pod named webapp:
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  containers:
  - name: webapp
    image: richardchesterwood/k8s-fleetman-webapp-angular:release0
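Assuming both manifests are saved locally (the file names below are hypothetical), they can be applied and checked with:
kubectl apply -f webapp-pod.yaml -f webapp-service.yaml
kubectl get service fleetman-webapp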
I have checked the minikube ip:
192.168.99.102
But when I type 192.168.99.102:30080 in the browser, the webapp is not reachable.
Please note that I use the latest version of Ubuntu. I have also checked whether any proxies or firewalls are active:
cat /etc/environment:
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
iptables -L:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
I have also disabled ufw in Ubuntu, but no success; the URL 192.168.99.102:30080 is still unreachable.
Would you please help me? Thanks in advance for your answer.
Even though you are exposing port 30080 via NodePort in minikube, minikube will still not expose it directly, because it uses its own external port to listen for this service. Minikube tunnels the service to expose it to the outside world. To find that exposed port:
minikube service $SERVICE_NAME
so, in your case
minikube service fleetman-webapp
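If you only need the URL (for curl or scripts) rather than having a browser opened, the --url flag prints it instead:
minikube service fleetman-webapp --url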
There are a lot of different hypervisors that can work with minikube. Choosing one depends highly on variables like the operating system. Some of them are:
Virtualbox
Hyper-V
VMware Fusion
KVM2
Hyperkit
"Docker (--vm-driver=none)" (see the quotes)
The official documentation talks about it: Kubernetes.io: Minikube: Specifying the vm driver
The choice of hypervisor affects how minikube behaves.
Focusing on:
Docker: --vm-driver=none
Virtualbox: --vm-driver=virtualbox
Docker
Official documentation sums it up:
Minikube also supports a --vm-driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment but not a hypervisor.
-- Kubernetes.io: Install minikube: Install a hypervisor
The output of the command $ sudo minikube ip will show the IP address of the host machine.
A Service object of type NodePort will be available at IP_ADDRESS_OF_HOST:NODEPORT_PORT.
Following with the command $ kubectl get nodes -o wide:
NAME   STATUS   ROLES    AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
K8S    Ready    master   95s   v1.17.3   192.168.0.114   <none>        Ubuntu 18.04.4 LTS   5.3.0-28-generic   docker://19.3.8
Please take a specific look at:
INTERNAL-IP
192.168.0.114
It is the same IP address as the host it is running on. You can (for example) curl Pods without any restrictions. Please also consider the caution from the cited documentation:
Caution: The none VM driver can result in security and data loss issues. Before using --vm-driver=none, consult this documentation for more information.
You can check what was exposed with the command:
$ sudo netstat -tulpn
Virtualbox
Creating a minikube instance with --vm-driver=virtualbox will create a virtual machine hosted by VirtualBox.
A virtual machine created with this --vm-driver will have the 2 network interfaces listed below:
NAT
Host-only adapter
What is important is that your minikube instance will be accessible through the host-only adapter.
Host-only networking. This can be used to create a network containing the host and a set of virtual machines, without the need for the host's physical network interface. Instead, a virtual network interface, similar to a loopback interface, is created on the host, providing connectivity among virtual machines and the host.
-- Virtualbox.org: Virtual networking
For example:
minikube host-only adapter will have an address: 192.168.99.103
Your host-only adapter will have an address: 192.168.99.1
They must be different!
If you are having issues connecting to this adapter, please check that:
the minikube host-only adapter address responds to ping once minikube start has completed successfully;
your host-only adapter is present in your network configuration, which you can verify by issuing either:
ip a
ifconfig
your host-only adapter address is in the same subnet as your minikube instance.
From my experience, rebooting/recreating this adapter has fixed things every time something wasn't right.
The output of the command $ sudo minikube ip will show the IP address of the host-only adapter.
Following with the command $ kubectl get nodes -o wide:
NAME   STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE              KERNEL-VERSION   CONTAINER-RUNTIME
m01    Ready    master   29m   v1.17.3   192.168.99.103   <none>        Buildroot 2019.02.9   4.19.94          docker://19.3.6
Please take a specific look once more at INTERNAL-IP and the IP address associated with it.
A Service object of type NodePort will be available at:
IP_ADDRESS_OF_HOST_ONLY_ADAPTER:NODEPORT_PORT
I recreated your Deployment and the Service attached to it, and it worked in both the --vm-driver=none and --vm-driver=virtualbox cases.
Please let me know if you have any questions in this topic.
I have had the same issue and spent the last 2 days trying to solve it. I tried to enable the ingress addon:
minikube addons enable ingress
and also tried to run:
minikube tunnel
I looked for a way to allow the host machine to access the container's IP address, but apparently couldn't find one.
If you run minikube on docker:
minikube start --driver=docker
you won't be able to access the minikube IP from your host machine, since the minikube container's IP address is only reachable from inside the Docker Desktop VM, not from your host machine.
You could run minikube with another driver such as VirtualBox or Hyper-V, and that might help.
minikube start --driver=hyperv
minikube start --driver=virtualbox
Read more about the minikube drivers
In fact, that is really annoying if you don't have enough resources on your computer to run both the Docker Desktop VM and the minikube VM at the same time; it will eventually slow your computer down.
To avoid that, Docker Desktop for Mac and Windows provides an easier alternative to minikube: you can simply activate the Kubernetes feature in the Docker Desktop UI.
Once it is set up, you can right-click on the Docker Desktop icon > Kubernetes.
To verify that your deployment/service now works properly:
kubectl apply -f /file.yaml
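As a quick sketch of that verification (docker-desktop is the context name Docker Desktop registers; the manifest path is whatever yours is):
kubectl config use-context docker-desktop
kubectl apply -f /file.yaml
kubectl get pods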
For this specific (and really great) Kubernetes course on Udemy by Richard Chesterwood, the following solution works out of the box on Windows: just start minikube with the hyperv driver, and it will automatically map all the ports you expect onto your host machine, as explained in detail by Dawid Kruk above. Therefore, all you need to start minikube "correctly" is the following command:
minikube start --driver=hyperv
Be careful when specifying the exact amount of memory to give to this minikube instance. In my experience, Hyper-V is a bit sensitive about how much memory you assign to it, which can result in errors like:
minikube start --driver=hyperv --memory=8192
...
Not enough memory in the system to start the virtual machine minikube.
Could not initialize memory: Not enough memory resources are available to complete this operation. (0x8007000E).
'minikube' failed to start. (Virtual machine ID D4BC7B61-4E4D-4079-94DE-...)
Not enough memory in the system to start the virtual machine minikube with ram size 8192 megabytes. (Virtual machine ID ...)
Therefore, just use the unspecific command given above and Hyper-V will figure out on its own how much memory it really needs.
If you are running minikube on Windows, then minikube must be run from an Administrator command prompt window.

Openshift: Expose postgresql remotely

I've created a PostgreSQL instance in my OpenShift Origin v3. It's running correctly; however, I can't quite figure out why I am not able to reach it remotely.
I've exposed a route:
$ oc get routes
NAME         HOST/PORT                                  PATH   SERVICES     PORT         TERMINATION   WILDCARD
postgresql   postgresql-ra-sec.192.168.99.100.nip.io          postgresql   postgresql                 None
$ oc get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
postgresql   ClusterIP   172.30.59.113   <none>        5432/TCP   57m
This is my route (screenshot omitted):
I'm trying to access this instance from an Ubuntu machine, using psql:
$ psql --host=postgresql-ra-sec.192.168.99.100.nip.io --dbname=tdevhub
psql: could not connect to server: Connection refused
Is the server running on host "postgresql-ra-sec.192.168.99.100.nip.io" (192.168.99.100) and accepting
TCP/IP connections on port 5432?
Otherwise:
$ psql --host=postgresql-ra-sec.192.168.99.100.nip.io --port=80 --dbname=tdevhub
psql: received invalid response to SSL negotiation: H
I've checked dns resolution, and it seems to work correctly:
$ nslookup postgresql-ra-sec.192.168.99.100.nip.io
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: postgresql-ra-sec.192.168.99.100.nip.io
Address: 192.168.99.100
EDIT
What about this? (screenshot of the route's port mapping omitted)
Why is there this redirection? Could I try to change it before port-forwarding?
Exposing a service via a route means that you're enabling external HTTP traffic. For a service like PostgreSQL, this is not going to work, as your example shows.
An alternative is to port-forward to your local machine and connect that way. For example, run oc get pods and then oc port-forward <postgresql-pod-name> 5432; this will allow you to create the TCP connection.
Run psql --host=localhost --dbname=tdevhub on the host machine to verify this.
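A minimal sketch of that session (the pod name below is hypothetical; substitute the one oc get pods prints):
$ oc get pods
$ oc port-forward postgresql-1-abcde 5432:5432
then, in a second terminal:
$ psql --host=localhost --port=5432 --dbname=tdevhub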
There is also the option, in some instances at least, of assigning external IPs to allow ingress traffic; see the OpenShift docs. This is more complicated to achieve, but a permanent solution as opposed to port forwarding. It looks like you are running oc cluster up or Minishift, however, so I'm not sure how viable this is.
In theory, while the port-forwarding answer is correct and the only one I have made work, in OpenShift 3.x you could use a TCP route for this: https://documentation.its.umich.edu/node/2126
However, it does not seem to work (at least for me) in OpenShift 4.x.
Also, I personally don't like port forwarding, because it assumes you can establish a connection with a user that can reach the cluster and has the permissions within the namespace to do what it needs to do.
I would much rather suggest the ingress solution:
https://docs.openshift.com/container-platform/4.6/networking/configuring_ingress_cluster_traffic/configuring-externalip.html
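As a rough sketch of that approach, a Service with an external IP could look like the following. Everything here is an assumption: the selector must match your PostgreSQL pod's labels, and the cluster's externalIP policy must permit the address you pick (192.168.99.100 is just the VM IP from the question).
apiVersion: v1
kind: Service
metadata:
  name: postgresql-external   # hypothetical name
spec:
  selector:
    name: postgresql          # must match your pod's labels
  ports:
  - port: 5432
    targetPort: 5432
  externalIPs:
  - 192.168.99.100
With that in place, psql --host=192.168.99.100 --port=5432 --dbname=tdevhub would connect from outside the cluster, subject to your network configuration.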