Ingress Nginx on Multi-Node VirtualBox Driver Minikube - kubernetes

I am following this tutorial for setting up Ingress with Ingress-Nginx on Minikube, but I can't seem to get it to work: I get a connection refused error when I try to connect to port 80 on the VM IP address returned by minikube ip.
My setup is this:
Minikube version: v1.25.1
VirtualBox version: 6.1
Kubernetes version: v1.22.5
The ingress-nginx namespace has the below resources:
NAME                                            READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-85f4c5b458-2dhqh   1/1     Running   0          49m

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/ingress-nginx-controller             NodePort    10.102.88.109   <none>        80:30551/TCP,443:31918/TCP   20h
service/ingress-nginx-controller-admission   ClusterIP   10.103.134.39   <none>        443/TCP                      20h

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-controller   1/1     1            1           20h

NAME                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-controller-85f4c5b458   1         1         1       20h

NAME                                       COMPLETIONS   DURATION   AGE
job.batch/ingress-nginx-admission-create   1/1           6s         20h
job.batch/ingress-nginx-admission-patch    1/1           6s         20h
The default namespace has the below resources
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
pod/web-79d88c97d6-rvp2r   1/1     Running   0          47m   10.244.1.4   minikube-m02   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          20h   <none>
service/web          NodePort    10.104.20.14   <none>        8080:31613/TCP   20h   app=web

NAME                                        CLASS   HOSTS              ADDRESS     PORTS   AGE
ingress.networking.k8s.io/example-ingress   nginx   hello-world.info   localhost   80      20h
Minikube is exposing these services:
|---------------|------------------------------------|--------------|-----------------------------|
| NAMESPACE     | NAME                               | TARGET PORT  | URL                         |
|---------------|------------------------------------|--------------|-----------------------------|
| default       | kubernetes                         | No node port |                             |
| default       | web                                | 8080         | http://192.168.59.106:31613 |
| ingress-nginx | ingress-nginx-controller           | http/80      | http://192.168.59.106:30551 |
|               |                                    | https/443    | http://192.168.59.106:31918 |
| ingress-nginx | ingress-nginx-controller-admission | No node port |                             |
| kube-system   | kube-dns                           | No node port |                             |
| kube-system   | registry                           | No node port |                             |
|---------------|------------------------------------|--------------|-----------------------------|
In step 4 of the "Create an Ingress" section, the tutorial mentions this:
Add the following line to the bottom of the /etc/hosts file on your computer (you will need administrator access):
172.17.0.15 hello-world.info
Note: If you are running Minikube locally, use minikube ip to get the external IP. The IP address displayed within the ingress list will be the internal IP.
It's a three-node cluster using VirtualBox. I've tried adding the Minikube ingress-nginx-controller service's IP (192.168.59.106, which is also the result of minikube ip) to my hosts file, but it doesn't work. And as far as I know, I can't include the service's node port 30551 in the hosts file to test that.
Some guidance on how to get this working would be much appreciated.

You are correct. You cannot include the port in the /etc/hosts file. To get there, you would need to specify the full address in your browser or some other application as follows (assuming no connectivity issues):
hello-world.info:30551
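For example (a sketch assuming the node IP and NodePort from your output), the hosts entry holds only the name-to-IP mapping, and the port goes into the URL:

# /etc/hosts maps hostnames to IPs only; ports cannot be stored here.
echo "192.168.59.106 hello-world.info" | sudo tee -a /etc/hosts

# The NodePort (30551 in your case) must be given explicitly on the request:
curl http://hello-world.info:30551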
I'd recommend describing specifically what type of issue you have. There can be multiple issues, and each one will have a different solution.
For example, there is a difference between being unable to access the Service at all and getting a 404 message.
I'm not sure if it's related, but I had connectivity issues when I created a cluster in the following way:
minikube start --driver="virtualbox"
minikube node add
minikube node add
However, when I ran the command below, I encountered none:
minikube start --driver="virtualbox" --nodes=3
Assuming that you would like to expose your Nginx Ingress controller on ports 80 and 443 instead of NodePorts, you can do the following:
Spawn your cluster
Deploy MetalLB (metallb.universe.tf)
Configure your address pool similar to:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.59.200-192.168.59.210
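A usage sketch, assuming the ConfigMap above is saved as metallb-config.yaml (the file name is hypothetical; note also that MetalLB v0.13+ replaced this ConfigMap with IPAddressPool/L2Advertisement CRDs):

# Apply the address-pool configuration:
kubectl apply -f metallb-config.yaml

# Verify that the MetalLB controller and speakers are running:
kubectl get pods -n metallb-system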
Change the Service type of your ingress-nginx-controller from NodePort to LoadBalancer (kubectl edit svc -n ingress-nginx ingress-nginx-controller)
Check on the Service:
kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.106.63.253   192.168.59.201   80:30092/TCP,443:30915/TCP   23m
Put the EXTERNAL-IP of your Ingress controller into your /etc/hosts file.
Create an Ingress resource that matches the name you put into /etc/hosts and has some backend, as in the sketch below.
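A minimal sketch of such an Ingress, assuming the web Service on port 8080 and the hello-world.info hostname from the question:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: hello-world.info     # must match the name you put in /etc/hosts
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web          # the backend Service from the question
            port:
              number: 8080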
Additional resources:
Kubernetes.github.io: Ingress nginx
Kubernetes.io: Docs: Concepts: Services networking: Service

When following the tutorial, I enabled the ingress addon after creating my cluster by running minikube addons enable ingress
This appeared to succeed, but when trying to connect to port 80 on the IP address returned by minikube ip (which is also the ingress-nginx-controller minikube service address), I got a connection refused. This can be validated by running:
nc -zv $(minikube ip) 80
However, when I enabled ingress at the time of initial cluster creation with this command:
minikube start --driver=virtualbox \
--kubernetes-version=v1.22.5 --nodes 3 \
--addons=ingress
and then ran nc -zv $(minikube ip) 80, the connection was accepted. I'm not sure if this is an issue with Minikube or with VirtualBox, but enabling ingress at the initial cluster creation time rather than subsequently worked for me.
I was then able to update my hosts file with just the IP of the minikube node and the hello-world.info host, as sketched below.
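As a sketch, assuming the same hostname as the tutorial:

# Append the minikube node IP and the tutorial hostname to /etc/hosts:
echo "$(minikube ip) hello-world.info" | sudo tee -a /etc/hosts

# With the addon enabled at creation time, plain port 80 now answers:
curl http://hello-world.info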
One thing that might also be worth noting if you create and delete your clusters a lot: I found that when updating the hosts file on a Mac, old IPs were sometimes being cached. Running sudo dscacheutil -flushcache may help with this.

Related

Unable to reach service/API from outside the cluster - Kubernetes (Metallb+HAProxy Ingress Controller)

I've created a bare-metal multi-master k8s cluster using kubekey.
$ kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   23h   v1.23.10
master2   Ready    control-plane,master   23h   v1.23.10
master3   Ready    control-plane,master   23h   v1.23.10
worker1   Ready    worker                 23h   v1.23.10
worker2   Ready    worker                 23h   v1.23.10
worker3   Ready    worker                 23h   v1.23.10
$ curl localhost:10249/healthz
ok
Added the MetalLB load balancer and the HAProxy Ingress Controller. The haproxy-controller gets the external IP address from MetalLB correctly:
$ kubectl get svc -n haproxy-controller
NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
haproxy-kubernetes-ingress   LoadBalancer   10.233.59.120   10.30.2.81    80:32244/TCP,443:30908/TCP,1024:32666/TCP   21h
Deployed a microservice, and exposed the service via ingress:
$ kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.233.0.1     <none>        443/TCP   23h
ms-login-http   ClusterIP   10.233.3.180   <none>        80/TCP    21h
$ kubectl describe ing
Name:             ms-login-http
Labels:           <none>
Namespace:        default
Address:          10.30.2.81
Default backend:  ms-login-http:80 (10.233.103.1:8080)
Rules:
  Host             Path     Backends
  ----             ----     --------
  api.mydomain.in
                   /api/sc  ms-login-http:80 (10.233.103.1:8080)
Annotations:       haproxy.org/load-balance: roundrobin
                   haproxy.org/src-ip-header: True-Client-IP
Events:            <none>
The issue is reachability of the deployed API:
[✓] Accessing the API from within any of the cluster nodes works fine
$ curl api.mydomain.in/api/sc/healthcheck
success
[✕] Same API from outside the cluster nodes fails
$ curl api.mydomain.in/api/sc/healthcheck
curl: (7) Failed to connect to api.mydomain.in port 80 after 0 ms: Connection refused
It seems to be a firewall issue, but I'm unable to narrow down what may be blocking the traffic. The iptables on the master nodes have several Calico forward rules. The rules list is shared in this gist.
Any direction/insight would greatly help, as I'm missing something basic here. I did not face this issue when I created a similar cluster some months back. It seems the latest version of Calico has something to do with it.

How to connect from pgAdmin to Postgresql in Kubernetes/Minikube

I run a local Kubernetes cluster (Minikube) and I am trying to connect pgAdmin to PostgreSQL, both running in Kubernetes.
What would be the connection string? Should I access it by service IP address or by service name?
kubectl get service
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
dbpostgresql      NodePort    10.103.252.31   <none>        5432:30201/TCP   19m
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP          3d21h
pgadmin-service   NodePort    10.109.58.168   <none>        80:30200/TCP     40h
kubectl get ingress
NAME              CLASS    HOSTS   ADDRESS        PORTS   AGE
pgadmin-ingress   <none>   *       192.168.49.2   80      40h
kubectl get pod
NAME                        READY   STATUS    RESTARTS   AGE
pgadmin-5569ddf4dd-49r8f    1/1     Running   1          40h
postgres-78f4b5db97-2ngck   1/1     Running   0          23m
I have tried with 10.103.252.31:30201 but without success.
Inside the cluster, services can refer to each other by DNS based on Service object names. So in this case you would use dbpostgresql or dbpostgresql.default.svc.cluster.local as the hostname.
Remember that minikube runs inside its own VM or container, so the NodePorts and ClusterIPs you're getting back are only open inside of minikube. To get minikube's resolution of port and IP, run: minikube service <your-service-name> --url
This will return something like http://127.0.0.1:50946 which you can use to create an external DB connection.
Another option would be to use kubectl to forward a local port to the service, e.g. kubectl port-forward service/django-service 8080:80.
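Applied to this question, a sketch using the dbpostgresql Service from the output above would be:

# Forward local port 5432 to the dbpostgresql Service inside the cluster:
kubectl port-forward service/dbpostgresql 5432:5432

# pgAdmin on the host can then connect with host 127.0.0.1, port 5432,
# plus the database credentials configured in the postgres pod.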

ClusterIP not reachable within the Cluster

I'm struggling with Kubernetes configuration. What I want is simply to reach a deployment within the cluster. The cluster is on my dedicated server, and I'm deploying it using Kubeadm.
My nodes:
$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    master   9d    v1.19.3
k8s-worker1   Ready    <none>   9d    v1.19.3
I have a deployment running (the basic nginx example):
$ kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           29m
I've created a service
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   9d
my-service   ClusterIP   10.106.109.94   <none>        80/TCP    20m
The YAML file for my service is the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-deployment
  ports:
    - protocol: TCP
      port: 80
Now I would expect that running curl 10.106.109.94:80 on my k8s-master would get the HTTP answer, but what I get is:
curl: (7) Failed to connect to 10.106.109.94 port 80: Connection refused
I've tried with NodePort as well and with targetPort and nodePort but the result is the same.
The ClusterIP is not reachable from outside of your cluster, which means you will not get any response from the host machine that hosts your k8s cluster: this IP is not part of your machine or any other machine; rather, it's a cluster IP used by your cluster's CNI network, such as Flannel or Weave.
So to make your services accessible from the outside, or at least from the host machine, you have to change the type of your service to NodePort or LoadBalancer, or use kubectl port-forward.
If you change the service type to NodePort (see the YAML sketch below), you will get a response using any of your host machines' IPs and the allocated nodePort.
For example, if your k8s-master is 192.168.x.x and the nodePort is 33303, then you can get a response with
curl http://192.168.x.x:33303
or
curl http://worker_node_ip:33303
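A minimal sketch of the NodePort variant of the my-service definition from the question (the explicit nodePort value is an assumption for illustration):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx-deployment
  ports:
    - protocol: TCP
      port: 80          # Service port inside the cluster
      targetPort: 80    # container port on the pods
      nodePort: 30080   # hypothetical fixed port in the 30000-32767 range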
If your cluster is installed locally, you can install MetalLB to get LoadBalancer functionality.
You can also use port-forward to make your service accessible from any host that has a kubectl client with access to the k8s cluster:
kubectl port-forward svc/my-service 80:80
kubectl -n namespace port-forward svc/service_name Port:Port

Access services on k8s on prem

I have 3 virtual machines (Ubuntu 18 LTS) on my local PC: 1 is a master and 2 are nodes. I was able to install Kubernetes and also to set up my application.
My application consist of 3 parts: database, backend and frontend. For each of these parts I've created and deployed services. I want to expose the FE service outside the cluster to be able to access it from one of the nodes.
The service description looks like this:
apiVersion: v1
kind: Service
metadata:
  name: fe-deployment
  labels:
    run: fe-srv
spec:
  ports:
    - protocol: TCP
      port: 8085
      targetPort: 80
  selector:
    app: fe
  type: NodePort
The output of
kubectl get node -o wide
NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8node1   Ready    <none>   2d22h   v1.16.0   172.17.199.105   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
k8node2   Ready    <none>   2d22h   v1.16.0   172.17.199.110   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7

kubectl get service -o wide
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
be-deployment   ClusterIP   10.96.169.225   <none>        8080/TCP         2d22h   app=be
db-deployment   ClusterIP   10.110.14.88    <none>        3306/TCP         2d22h   app=db
fe-deployment   NodePort    10.104.211.32   <none>        8085:32476/TCP   2d21h   app=fe
I would have expected that using one node's IP and the node port I'd be able to access my FE from a browser, but it doesn't work.
What am I missing? How to access my FE from outside the cluster?
Edit
Based on the documentation, NodePort service type should:
Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort
I understand that I will access my service from outside of the cluster using node IP and static port. From the node IP statement I understand that it refers to the machine (the VM in my case) IP.
Later Edit
I've checked the firewall and it seems that it is disabled on all my machines:
sudo ufw status
Status: inactive
Later later edit
As I said in a comment, trying to telnet to the IPv4 address didn't work. Trying with IPv6 does work on localhost, and also using the Ethernet interface's IPv6 address.
The netstat output is:
netstat -6 -a | grep 324
tcp6 1 0 [::]:32476 [::]:* LISTEN
Despite the fact that it should work (based on the information I read on the internet), it doesn't work with IPv4. Is there a way to change this?
Later later later edit
It seems that this is a bug
You can assign the node's IP address as the EXTERNAL-IP of the fe service.
Then you can check: curl -k http://EXTERNAL-IP:PORT
Here EXTERNAL-IP is the IP address of the node.
In your case, because you didn't define a nodePort, Kubernetes randomly assigned port 32476 to your service. To access that service, go to <EXTERNAL-NODE-IP>:32476 (kubernetes-docs).
If you want to assign a specific port, you need to define nodePort in the service definition (an example for an nginx-based ingress):
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
spec:
  ports:
    - name: http
      nodePort: 30080
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
You will not get an external IP when exposing a service as a NodePort.
Exposing a Service as a NodePort means that your service is available externally via the node IP of any node in the cluster, at a random port between 30000-32767 (the default behaviour).
Each of the nodes in the cluster proxies that port (the same port number on every node) into the pod where your service is launched.
From your kubectl get service -o wide output:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
fe-deployment   NodePort   10.104.211.32   <none>        8085:32476/TCP   2d21h   app=fe
We can find that port on which your service is exposed is port 32476.
From Your kubectl get node -o wide output:
NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8node1   Ready    <none>   2d22h   v1.16.0   172.17.199.105   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
k8node2   Ready    <none>   2d22h   v1.16.0   172.17.199.110   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
We can find that your node IPs are 172.17.199.105 and 172.17.199.110.
You can now access your service externally using <Node-IP>:<Node-Port>.
So in Your case these are 172.17.199.105:32476 and 172.17.199.110:32476 depending on which node you want to access Your service.
Additionally, if you want a fixed Node port, you can specify that in the yaml.
You need to make sure you add a security rule on your nodes to allow traffic on the particular port, for instance as sketched below.
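For instance, on the Ubuntu nodes from the question, a ufw rule could look like this (a sketch; 32476 is the NodePort allocated above):

# Allow inbound traffic to the allocated NodePort on each node:
sudo ufw allow 32476/tcp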

Kong Ingress Controller at Home

I'm learning about Kubernetes and ingress controllers, but I'm stuck on this error when I try to apply the Kong ingress manifest...
ingress-kong-7dd57556c5-bh687   0/2   Init:0/1   0   29s
kong-migrations-gzlqj           0/1   Init:0/1   0   28s
postgres-0                      0/1   Pending    0   28s
Is it possible to run this ingress on my home server without Minikube? If so, how?
Note: I have a FQDN pointing to my home server.
I guess you ran the manifest from GitHub.
Issues with Pods
I have reproduced your case. As you have 3 pods, you used the option with a DB.
If you describe the pods using
$ kubectl describe pod <podname> -n kong
you will receive this error output:
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  7s (x4 over 17s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
You can also check the jobs in the kong namespace.
It works correctly on a fresh Minikube cluster, so I guess you might need to apply changes to your StorageClass so that the PersistentVolumeClaim can bind; a sketch follows below.
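For example, the unbound PersistentVolumeClaim could be satisfied by creating a PersistentVolume manually; this is only a sketch, with an assumed name, capacity, and hostPath that must match what the Postgres PVC actually requests:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kong-postgres-pv        # hypothetical name
spec:
  capacity:
    storage: 1Gi                # must be >= the PVC's request
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/kong-postgres   # hypothetical path on the node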
Is it possible to run this ingress on my home server without Minikube? If so, how?
You have to use Kubernetes to do it. Since Minikube supports LoadBalancer services, you can use it at home.
You can check this thread about FQDN. As mentioned:
The host machine should be able to resolve the name of that FQDN. You
might add a record into the /etc/hosts at the Mac host to achieve
that:
10.0.0.2 mydb.mytestdomain
But in your case it should be the IP address of the LoadBalancer, kong-proxy.
Obtain LoadBalancer IP in Minikube
If you deploy everything correctly, you can check your services:
$ kubectl get svc -n kong
You will see the kong-proxy service with LoadBalancer type and a <pending> EXTERNAL-IP.
To obtain the EXTERNAL-IP you have to use minikube tunnel.
Please note that you need to keep $ sudo minikube tunnel running in one console the whole time.
Before Minikube tunnel
$ kubectl get svc -n kong
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kong-proxy                LoadBalancer   10.110.218.74    <pending>     80:31881/TCP,443:31319/TCP   103m
kong-validation-webhook   ClusterIP      10.108.204.137   <none>        443/TCP                      103m
postgres                  ClusterIP      10.105.9.54      <none>        5432/TCP                     103m
After
$ kubectl get svc -n kong
NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
kong-proxy                LoadBalancer   10.110.218.74    10.110.218.74   80:31881/TCP,443:31319/TCP   104m
kong-validation-webhook   ClusterIP      10.108.204.137   <none>          443/TCP                      104m
postgres                  ClusterIP      10.105.9.54      <none>          5432/TCP                     104m
Testing Kong
Here you can find how to get started with Kong. It will show you how to create an Ingress. Later, as I mentioned, you have to edit the Ingress and add a rule (host) similar to the one in the K8s docs; a sketch follows below.
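A minimal sketch of such a host rule (the hostname and backend Service are hypothetical placeholders for your FQDN and app):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                  # hypothetical name
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: app.example.com             # your FQDN pointing at the kong-proxy IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service        # hypothetical backend Service
            port:
              number: 80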