Make istio-ingress work with MetalLB on a bare-metal Kubernetes cluster

Update 14-03-2021
The MetalLB LoadBalancer IP 192.168.0.21 is reachable from the cluster (master/nodes) only:
root@C271-KUBE-NODE-0-04:~# curl -s -I -HHost:httpbin.example.com "http://192.168.0.21:80/status/200"
HTTP/1.1 200 OK
server: istio-envoy
date: Sun, 14 Mar 2021 17:32:36 GMT
content-type: text/html; charset=utf-8
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 0
x-envoy-upstream-service-time: 2
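MetalLB in layer2 mode answers ARP for the service IP from one elected node, so when the IP works on the nodes but not off-box, the first thing to check is whether that ARP announcement reaches the rest of the segment. A minimal sketch from an external machine (the interface name eth0 is an assumption):
# From a host on the same L2 segment that cannot reach the IP
arping -I eth0 192.168.0.21
arp -n 192.168.0.21   # the MAC shown should match one of the cluster nodes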
Issue
I am trying to get Istio working with MetalLB on VMware ESXi.
I installed MetalLB with helm install metallb bitnami/metallb -n metallb-system -f metallb-config.yaml, where metallb-config.yaml contains:
configInline:
  address-pools:
  - name: prod-k8s-pool
    protocol: layer2
    addresses:
    - 192.168.0.21
I used https://istio.io/latest/docs/setup/install/helm/ to install Istio:
helm install istio-base manifests/charts/base --set global.jwtPolicy=first-party-jwt -n istio-system
helm install istiod manifests/charts/istio-control/istio-discovery --set global.jwtPolicy=first-party-jwt -n istio-system
helm install istio-ingress manifests/charts/gateways/istio-ingress --set global.jwtPolicy=first-party-jwt -n istio-system
helm install istio-egress manifests/charts/gateways/istio-egress --set global.jwtPolicy=first-party-jwt -n istio-system
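Before debugging traffic it is worth confirming the control plane and gateway pods actually came up (plain kubectl, nothing chart-specific):
kubectl get pods -n istio-system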
❯ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpbin LoadBalancer 10.104.32.168 <none> 8000:32483/TCP 16m
istio-egressgateway ClusterIP 10.107.11.137 <none> 80/TCP,443/TCP,15443/TCP 20m
istio-ingressgateway LoadBalancer 10.109.199.203 192.168.0.21 15021:32150/TCP,80:31977/TCP,443:30960/TCP,15012:30927/TCP,15443:31439/TCP 31m
istiod ClusterIP 10.96.10.193 <none> 15010/TCP,15012/TCP,443/TCP,15014/TCP 33m
At the same time, the MetalLB controller logs say it allocated the IP:
metallb-system/metallb-controller-64c58bc7c6-bks6m[metallb-controller]: {"caller":"service.go:114","event":"ipAllocated","ip":"192.168.0.21","msg":"IP address assigned by controller","service":"istio-system/istio-ingressgateway","ts":"2021-03-14T09:20:12.906308842Z"}
I am deploying the simple httpbin sample from https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/:
kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml)
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
EOF
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
EOF
But from other machines on the same network, the IP 192.168.0.21 never responds:
curl -s -I -HHost:httpbin.example.com "http://192.168.0.21:80/status/200"
For comparison, an nginx-ingress installation whose service has
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.0.21
works fine. Can anybody explain how to get Istio working with MetalLB on bare metal?
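One known layer2 pitfall worth ruling out, hedged because it only applies if kube-proxy runs in IPVS mode: MetalLB's documentation requires strictARP: true in that case. A sketch of checking and enabling it on a kubeadm-style cluster (the ConfigMap name kube-proxy is the kubeadm default):
# Check the kube-proxy mode and current strictARP setting
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -E "mode|strictARP"
# Enable strictARP and restart kube-proxy so it takes effect
kubectl -n kube-system get configmap kube-proxy -o yaml | \
  sed -e 's/strictARP: false/strictARP: true/' | \
  kubectl apply -f - -n kube-system
kubectl -n kube-system rollout restart daemonset kube-proxy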

Related

Why do requests to the deployment fail both via the service and via the ingress?

I installed minikube v1.29.0 on macOS.
I created an API endpoint in Flask and built it into a Docker image:
FROM debian:latest
# debian:latest ships without Python, so install it before the pip3 step below
RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip && rm -rf /var/lib/apt/lists/*
COPY . /app
WORKDIR /app
RUN pip3 install --no-cache-dir -r requirements.txt
CMD ["uwsgi", "--socket", "0.0.0.0:5001", "--protocol=http", "-w", "wsgi:app", "--ini", "wsgi.ini"]
Then I loaded the Docker image into minikube:
minikube image load drnoreg/devops_blog:0.0.1
and checked that it is present:
% minikube image ls
docker.io/drnoreg/devops_blog:0.0.1
Then I created the deployment, service and ingress YAML:
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-blog
spec:
  selector:
    matchLabels:
      run: devops-blog
  replicas: 1
  template:
    metadata:
      labels:
        run: devops-blog
    spec:
      containers:
      - name: devops-blog
        image: docker.io/drnoreg/devops_blog:0.0.1
        ports:
        - name: pod-port
          containerPort: 5001
---
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: NodePort
  ports:
  - name: pod-port
    port: 5001
    targetPort: 5001
    protocol: TCP
    nodePort: 30001
  selector:
    run: devops-blog
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-blog
  namespace: devops-blog
spec:
  rules:
  - host: devops-blog.cluster.local
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: devops-blog
            port:
              number: 5001
Create the namespace:
kubectl create namespace devops-blog
Set the current namespace:
kubectl config set-context --current --namespace=devops-blog
and create the deployment, service and ingress:
kubectl create -f app.yaml
Then I port-forwarded to check that the Flask API works:
kubectl port-forward devops-blog-f666d8cd7-njp95 5001:5001
Forwarding from 127.0.0.1:5001 -> 5001
Forwarding from [::1]:5001 -> 5001
Handling connection for 5001
Handling connection for 5001
The Flask API inside minikube is working.
% kubectl get service -n devops-blog -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
devops-blog NodePort 10.99.37.126 <none> 5001:30001/TCP 45s run=devops-blog
% kubectl get pod -n devops-blog -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
devops-blog-f666d8cd7-b9n7j 1/1 Running 0 57s 10.244.0.34 minikube <none> <none>
% kubectl get node -n devops-blog -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane 16h v1.26.1 192.168.49.2 <none> Ubuntu 20.04.5 LTS 5.10.47-linuxkit docker://20.10.23
Now I try to reach the API via the NodePort service:
% telnet 192.168.49.2 30001
Trying 192.168.49.2...
It does not work. I added to /etc/hosts:
127.0.0.1 devops-blog.cluster.local
and tried to reach the API via the minikube ingress:
% telnet devops-blog.cluster.local 80
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
That does not work either.
Why do requests to the deployment fail both via the service and via the ingress?
How do I solve this problem?
In case you did not enable the ingress addon, enable it by executing the following command:
$ minikube addons enable ingress
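To confirm the addon's controller actually came up (recent minikube versions run it in the ingress-nginx namespace; older ones used kube-system):
minikube addons list | grep ingress
kubectl get pods -n ingress-nginx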
Instead of a NodePort service, try using a ClusterIP service for the app, and when you create the ingress give this service as the backend, like this:
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: devops-blog
  labels:
    run: devops-blog
spec:
  type: ClusterIP
  ports:
  - name: pod-port
    port: 5001
    targetPort: 5001
    protocol: TCP
  selector:
    run: devops-blog
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: devops-blog
  namespace: devops-blog
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false" # since you are testing over plain HTTP on localhost
spec:
  rules:
  - host: devops-blog.cluster.local
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: devops-blog
            port:
              number: 5001
        path: /
Once the ingress has been assigned an IP, try opening http://devops-blog.cluster.local/ in a local browser, or curl it: curl http://devops-blog.cluster.local/.
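For the hostname to reach the cluster instead of 127.0.0.1, point it at the minikube IP; note that on macOS with the docker driver the ingress is typically only reachable through minikube tunnel (a hedged sketch):
echo "$(minikube ip) devops-blog.cluster.local" | sudo tee -a /etc/hosts
# docker driver on macOS: run this in a separate terminal, then browse via 127.0.0.1
minikube tunnel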
Note: if you deploy this app in the cloud, use a LoadBalancer service instead.
Try this tutorial, as it explains the setup in detail.

How to make my first ingress work on bare metal via NodePort?

I have a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
  namespace: dev
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-app:2.0"
Make service:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  namespace: dev
  labels:
    app: hello
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
To check it, I also made a NodePort service:
---
apiVersion: v1
kind: Service
metadata:
  name: hello-node-service
  namespace: dev
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
$ kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node-service NodePort 10.233.3.50 <none> 80:31263/TCP 9h
hello-service ClusterIP 10.233.45.159 <none> 80/TCP 44h
$ curl -I http://cluster.local:31263
HTTP/1.1 200 OK
Date: Sat, 11 Sep 2021 07:31:28 GMT
Content-Length: 66
Content-Type: text/plain; charset=utf-8
I have verified that the service is working.
I installed the bare-metal ingress-nginx (https://kubernetes.github.io/ingress-nginx/deploy/):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.0/deploy/static/provider/baremetal/deploy.yaml
$ kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --watch
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-7gsft 0/1 Completed 0 10h
ingress-nginx-admission-patch-qj57b 0/1 Completed 1 10h
ingress-nginx-controller-8cf5559f8-mh6fr 1/1 Running 0 10h
$ kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.233.52.118 <none> 80:30377/TCP,443:31682/TCP 10h
ingress-nginx-controller-admission ClusterIP 10.233.51.175 <none> 443/TCP 10h
Check it:
$ curl -I http://cluster.local:30377/healthz
HTTP/1.1 200 OK
Date: Sat, 11 Sep 2021 07:39:04 GMT
Content-Type: text/html
Content-Length: 0
Connection: keep-alive
Make ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
spec:
  rules:
  - host: cluster.local
    http:
      paths:
      - backend:
          service:
            name: hello-service
            port:
              number: 80
        path: "/hello"
        pathType: Prefix
Check it:
$ curl -I http://cluster.local:30377/hello
HTTP/1.1 404 Not Found
Date: Sat, 11 Sep 2021 07:40:43 GMT
Content-Type: text/html
Content-Length: 146
Connection: keep-alive
It doesn't work. I spent a few days on this and tried adding an ExternalIP to the ingress controller.
Can anyone with experience setting up ingress tell me what I am doing wrong?
=(((
INFO about cluster:
$ kubectl get ingress -n dev
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-hello <none> cluster.local 80 10h
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kuber-ingress-01 Ready worker 10d v1.21.3
kuber-master1 Ready control-plane,master 10d v1.21.3
kuber-master2 Ready control-plane,master 10d v1.21.3
kuber-master3 Ready control-plane,master 10d v1.21.3
kuber-node-01 Ready worker 10d v1.21.3
kuber-node-02 Ready worker 10d v1.21.3
kuber-node-03 Ready worker 10d v1.21.3
Inventory:
kuber-master1 10.0.57.31
kuber-master2 10.0.57.32
kuber-master3 10.0.57.33
kuber-node-01 10.0.57.34
kuber-node-02 10.0.57.35
kuber-node-03 10.0.57.36
kuber-ingress-01 10.0.57.30
$ ping cluster.local
PING cluster.local (10.0.57.30) 56(84) bytes of data.
64 bytes from ingress.example.com (10.0.57.30): icmp_seq=1 ttl=62 time=0.603 ms
The solution is to add the following annotations to the ingress.
Then the ingress controller starts to pick up the ingress and its host rules:
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/ssl-redirect: "false"
  nginx.ingress.kubernetes.io/use-regex: "true"
  nginx.ingress.kubernetes.io/rewrite-target: /$1
Also, for convenience, I changed path: / to a regular expression:
- path: /v1(/|$)(.*)
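Putting the answer's pieces together into a single manifest, as a sketch rather than a verified fix: note that with the two capture groups in this path, the upstream ingress-nginx rewrite example uses /$2 (the answer's /$1 would capture only the slash), and regex paths need pathType: ImplementationSpecific.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-hello
  namespace: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: cluster.local
    http:
      paths:
      - path: /v1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-service
            port:
              number: 80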

Find why I am getting a 502 Bad Gateway error on Kubernetes

I am using Kubernetes. I have an Ingress that talks to my container service. We have exposed a web API, which works fine overall, but we keep getting 502 Bad Gateway errors. I am new to Kubernetes and have no clue how to debug this issue. The server is a Node.js server connected to a database. Is there anything wrong with this configuration?
My deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: my-pod
        image: my-image
        ports:
        - name: "http"
          containerPort: 8086
        resources:
          limits:
            memory: 2048Mi
            cpu: 1020m
---
apiVersion: v1
kind: Service
metadata:
  name: my-pod-serv
spec:
  ports:
  - port: 80
    targetPort: "http"
  selector:
    app: my-pod
My Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: abc.test.com
    http:
      paths:
      - path: /abc
        backend:
          serviceName: my-pod-serv
          servicePort: 80
In your case:
I think you are getting this 502 error because the Ingress controller is not configured correctly.
Please try it with a properly installed Ingress controller, as in the example below; it does everything automatically.
Nginx Ingress step by step:
1) Install helm
2) Install nginx controller using helm
$ helm install stable/nginx-ingress --name nginx-ingress
It will create 2 services. You can get their details via
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 29d
nginx-ingress-controller LoadBalancer 10.39.243.140 35.X.X.15 80:32324/TCP,443:31425/TCP 19m
nginx-ingress-default-backend ClusterIP 10.39.252.175 <none> 80/TCP 19m
nginx-ingress-controller - in short, it receives requests matching Ingress rules and routes them to the backing services
nginx-ingress-default-backend - in short, the default backend is a service which handles all URL paths and hosts that the nginx controller doesn't understand
3) Create 2 deployments (or use yours)
$ kubectl run my-pod --image=nginx
deployment.apps/my-pod created
$ kubectl run nginx1 --image=nginx
deployment.apps/nginx1 created
4) Connect to one of the pods
$ kubectl exec -ti my-pod-675799d7b-95gph bash
Then append a line to the index page so we can recognize this pod when we reach it through the Ingress later.
$ echo "HELLO THIS IS INGRESS TEST" >> /usr/share/nginx/html/index.html
$ exit
5) Expose the deployments.
$ kubectl expose deploy nginx1 --port 80
service/nginx1 exposed
$ kubectl expose deploy my-pod --port 80
service/my-pod exposed
This automatically creates a service that looks like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-pod
  name: my-pod
  selfLink: /api/v1/namespaces/default/services/my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-pod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
6) Now it is time to create Ingress.yaml and deploy it. Each rule in the ingress needs to be specified explicitly. Here I have two services; each service's rules start with - host under the rules parameter.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: two-svc-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /pod
        backend:
          serviceName: my-pod
          servicePort: 80
  - host: nginx.test.svc
    http:
      paths:
      - path: /abc
        backend:
          serviceName: nginx1
          servicePort: 80
$ kubectl apply -f Ingress.yaml
ingress.extensions/two-svc-ingress created
7) You can check Ingress and hosts
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
two-svc-ingress my.pod.svc,nginx.test.svc 35.228.230.6 80 57m
8) Explanation of why I installed the Ingress controller.
Connect to the ingress controller pod:
$ kubectl exec -ti nginx-ingress-controller-76bf4c745c-prp8h bash
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ cat /etc/nginx/nginx.conf
Because I had installed the nginx ingress controller earlier, after deploying Ingress.yaml the controller noticed the change and automatically added the necessary configuration.
In this file you should be able to find the whole configuration for the two services. I will not copy the configuration, only the headers:
start server my.pod.svc
start server nginx.test.svc
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ exit
9) Test
$ kubectl get svc   # to get your nginx-ingress-controller external IP
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/
default backend - 404
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/pod
<!DOCTYPE html>
...
</html>
HELLO THIS IS INGRESS TEST
Please keep in mind that an Ingress needs to be in the same namespace as the services it points to. If you have services in several namespaces, you need to create an Ingress for each namespace.
I would need to set up a cluster in order to test your yml files, so just to help you debug, follow these steps:
1- Get the logs of the my-pod container using kubectl logs my-pod-container-name and make sure everything is working.
2- Use port-forward to expose your container and test it directly.
3- Make sure the service is working properly; change its type to LoadBalancer so you can reach it from outside the cluster.
If the three things are working there is a problem with your ingress configuration.
I am not sure if I explained it in enough detail; let me know if something is not clear.
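A sketch of those three steps as concrete commands, using the names from the question's manifests:
# 1) Container logs
kubectl logs deploy/my-pod
# 2) Bypass the Service and Ingress and hit the container directly
kubectl port-forward deploy/my-pod 8086:8086 &
curl -I http://localhost:8086/
# 3) Temporarily expose the Service to rule the Ingress in or out
kubectl patch svc my-pod-serv -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get svc my-pod-serv   # wait for an EXTERNAL-IP, then curl it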

Kubernetes ingress (hostNetwork=true), can't reach service by node IP - GCP

I am trying to expose a deployment using Ingress, where the DaemonSet has hostNetwork=true; this should let me skip the additional LoadBalancer layer and expose my service directly on the Kubernetes node's external IP. Unfortunately I can't reach the Ingress controller from the external network.
I am running Kubernetes version 1.11.16-gke.2 on GCP.
I setup my fresh cluster like this:
gcloud container clusters get-credentials gcp-cluster
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
I run the deployment:
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
EOF
Then I create service:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-node
EOF
and ingress resource:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-node-single-ingress
spec:
  backend:
    serviceName: hello-node
    servicePort: 80
EOF
I get the node external IP:
12:50 $ kubectl get nodes -o json | jq '.items[] | .status .addresses[] | select(.type=="ExternalIP") | .address'
"35.197.204.75"
Check if ingress is running:
12:50 $ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
hello-node-single-ingress * 35.197.204.75 80 8m
12:50 $ kubectl get pods --namespace ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-ingress-controller-7kqgz 1/1 Running 0 23m
ingress-nginx-ingress-default-backend-677b99f864-tg6db 1/1 Running 0 23m
12:50 $ kubectl get svc --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-ingress-controller ClusterIP 10.43.250.102 <none> 80/TCP,443/TCP 24m
ingress-nginx-ingress-default-backend ClusterIP 10.43.255.43 <none> 80/TCP 24m
Then trying to connect from the external network:
curl 35.197.204.75
Unfortunately it times out
On Kubernetes Github there is a page regarding ingress-nginx (host-netork: true) setup:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
which mentions:
"This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it."
I tried to follow that and deleted the ingress-nginx services:
kubectl delete svc --namespace ingress-nginx ingress-nginx-ingress-controller ingress-nginx-ingress-default-backend
but this doesn't help.
Any ideas how to set up the Ingress on the node's external IP? What am I doing wrong? The amount of confusion over running Ingress reliably without the LB overwhelms me. Any help much appreciated!
EDIT:
When I create another service that exposes my deployment via NodePort:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node2
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 2m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 6s
I still can't access my service, e.g. using curl 35.197.204.75:31151.
However, when I create a third service of type LoadBalancer:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node3
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 7m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 4m
hello-node3 LoadBalancer 10.47.250.47 35.189.106.111 80:31367/TCP 56s
I can access my service using the external LB: 35.189.106.111 IP.
The problem was missing firewall rules on GCP.
Found the answer: https://stackoverflow.com/a/42040506/2263395
Running:
gcloud compute firewall-rules create myservice --allow tcp:80,tcp:30301
Here 80 is the ingress port and 30301 is the NodePort. In production you would probably open just the ingress port.
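A hedged variant that looks up the assigned NodePort first and scopes the rule to the nodes (the network tag gke-gcp-cluster-node is a hypothetical example; check your actual node tags):
# Look up the NodePort Kubernetes assigned (hello-node2 is the service from the question)
kubectl get svc hello-node2 -o jsonpath='{.spec.ports[0].nodePort}'
# Allow only the needed ports, restricted to tagged nodes
gcloud compute firewall-rules create allow-ingress-http \
  --allow tcp:80 --target-tags=gke-gcp-cluster-node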

No ingress address on minikube Kubernetes cluster with nginx ingress controller

I've got the following:
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: abcxyz
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: abcxyz
    http:
      paths:
      - path: /a/
        backend:
          serviceName: service-a
          servicePort: 80
      - path: /b/
        backend:
          serviceName: service-b
          servicePort: 80
Output of kubectl describe ingress abcxyz:
Name: abcxyz
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
abcxyz
/a/ service-a:80 (<none>)
/b/ service-b:80 (<none>)
Annotations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 16m nginx-ingress-controller Ingress default/abcxyz
Normal UPDATE 12m (x2 over 15m) nginx-ingress-controller Ingress default/abcxyz
Why is the address empty? I've installed the nginx ingress controller through Helm using helm install stable/nginx-ingress, and all of its relevant pods seem to be running fine.
How can I get access through the ingress?
The solution for me was:
minikube addons enable ingress
Type
minikube ip
to retrieve the master IP. For example:
bash-3.2$ minikube ip
192.168.1.100
The command that provides information about the kubernetes cluster is:
bash-3.2$ kubectl cluster-info
Kubernetes master is running at https://192.168.1.100:8443
KubeDNS is running at https://192.168.1.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
You can test the ingress controller from the host machine using curl:
bash-3.2$ curl http://192.168.1.100:80
default backend - 404
Finally, add a hosts entry so you can use a name to refer to the cluster IP address.
In /etc/hosts add:
192.168.1.100 abcxyz
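With that hosts entry in place you can exercise the ingress rules from the question directly (paths /a/ and /b/ come from the manifest above; service-a and service-b must exist to get anything other than a 404):
curl http://abcxyz/a/
curl http://abcxyz/b/
# or, without the hosts entry, set the Host header explicitly
curl -H "Host: abcxyz" http://192.168.1.100/a/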
There appears to be a bug in https://helm.nginx.com/stable that makes it not bind to the Address in minikube.
The solution that worked for me was to instead use https://kubernetes.github.io/ingress-nginx
The installation instructions for the kubernetes version of NGINX ingress are here: https://kubernetes.github.io/ingress-nginx/deploy/, but here's the gist:
Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
Minikube
minikube addons enable ingress
microk8s
microk8s enable ingress
Also of note: the "bare metal" installation instructions use a NodePort service. Most IaaS providers have their own way of assigning IPs, so there are provider-specific instructions for each.
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-3.33.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.47.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller