GKE Ingress - Simple nginx example yet getting error "Could not find nodeport for backend"

I'm trying to create a simple nginx service on GKE, but I'm running into strange problems.
Nginx runs on port 80 inside the pod, and the service is accessible on port 8080. (This works: I can curl myservice:8080 from inside the pod and see the nginx home page.)
But when I try to make it publicly accessible using an Ingress, I run into trouble. Here are my deployment, service and ingress files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 32111
    targetPort: 80
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      # The * is needed so that all traffic gets redirected to nginx
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 80
After a while, this is what my ingress status looks like:
$ k describe ingress test-ingress
Name:             test-ingress
Namespace:        default
Address:          35.186.255.184
Default backend:  default-http-backend:80 (10.44.1.3:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /*    my-service:32111 (<none>)
Annotations:
  backends:         {"k8s-be-30030--ecc76c47732c7f90":"HEALTHY"}
  forwarding-rule:  k8s-fw-default-test-ingress--ecc76c47732c7f90
  target-proxy:     k8s-tp-default-test-ingress--ecc76c47732c7f90
  url-map:          k8s-um-default-test-ingress--ecc76c47732c7f90
Events:
  Type     Reason   Age               From                     Message
  ----     ------   ---               ----                     -------
  Normal   ADD      18m               loadbalancer-controller  default/test-ingress
  Normal   CREATE   17m               loadbalancer-controller  ip: 35.186.255.184
  Warning  Service  1m (x5 over 17m)  loadbalancer-controller  Could not find nodeport for backend {ServiceName:my-service ServicePort:{Type:0 IntVal:32111 StrVal:}}: could not find matching nodeport from service
  Normal   Service  1m (x5 over 17m)  loadbalancer-controller  no user specified default backend, using system default
I don't understand why it's saying that it can't find the nodePort - the service has nodePort defined and is of type NodePort as well. Going to the actual IP results in default backend - 404.
Any ideas why?

The configuration is missing a health check endpoint, which the GKE load balancer needs in order to know whether the backend is healthy. The containers section for nginx should also specify:
livenessProbe:
  httpGet:
    path: /
    port: 80
A GET on / at port 80 is the default configuration, and it can be changed.
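For instance, if the application serves its health endpoint somewhere other than /, the probe path and timing can be adjusted; a minimal sketch (the /healthz path and the timing values are illustrative, not part of the original setup):
livenessProbe:
  httpGet:
    path: /healthz        # hypothetical custom health endpoint
    port: 80
  initialDelaySeconds: 5  # give the container time to start before the first probe
  periodSeconds: 10       # probe every 10 seconds afterwards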

Related

Bare-metal k8s ingress with nginx-ingress

I can't apply an ingress configuration.
I need to access a jupyter-lab service by its DNS name:
http://jupyter-lab.local
It's deployed to a 3-node bare-metal k8s cluster:
node1.local (master)
node2.local (worker)
node3.local (worker)
Flannel is installed as the network controller.
I've installed nginx ingress for bare metal like this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
When deployed, the jupyter-lab pod is on node2 and the NodePort service responds correctly from http://node2.local:30004 (see below).
I'm expecting that the ingress-nginx controller will expose the ClusterIP service by its DNS name ...... that's what I need; is that wrong?
This is the ClusterIP service, defined with matching ports (8888) to be as simple as possible (is that wrong?):
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8888
    targetPort: 8888
  selector:
    app: jupyter-lab
The DNS name jupyter-lab.local resolves to the IP address range of the cluster, but the request times out with no response: Failed to connect to jupyter-lab.local port 80: No route to host
firewall-cmd --list-all shows that port 80 is open on each node.
This is the ingress definition for http into the cluster (any node) on port 80. (Is that wrong?)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    # nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io: /
spec:
  rules:
  - host: jupyter-lab.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jupyter-lab-cip
            port:
              number: 80
This is the deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jupyter-lab-dpt
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jupyter-lab
  template:
    metadata:
      labels:
        app: jupyter-lab
    spec:
      volumes:
      - name: jupyter-lab-home
        persistentVolumeClaim:
          claimName: jupyter-lab-pvc
      containers:
      - name: jupyter-lab
        image: docker.io/jupyter/tensorflow-notebook
        ports:
        - containerPort: 8888
        volumeMounts:
        - name: jupyter-lab-home
          mountPath: /var/jupyter-lab_home
        env:
        - name: "JUPYTER_ENABLE_LAB"
          value: "yes"
I can successfully access jupyter-lab by its NodePort http://node2:30004 with this definition:
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 10003
    targetPort: 8888
    nodePort: 30004
  selector:
    app: jupyter-lab
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
The command kubectl get endpoints -n ingress-nginx ingress-nginx-controller-admission returns:
ingress-nginx-controller-admission 10.244.2.4:8443 15m
Am I misconfiguring ports?
Are my "selector:appname" definitions wrong?
Am I missing a part?
How can I debug what's going on?
Other details:
I was getting this error when applying an ingress with kubectl apply -f default-ingress.yml:
Error from server (InternalError): error when creating "minnimal-ingress.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": context deadline exceeded
The command kubectl delete validatingwebhookconfigurations --all-namespaces removed the validating webhook ... was that wrong to do?
I've opened port 8443 on each node in the cluster.
The Ingress is invalid; try the following:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyter-lab-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jupyter-lab.local
    http: # <- removed the -
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # name: jupyter-lab-cip
            name: jupyter-lab-nodeport
            port:
              number: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: jupyter-lab-cip
  namespace: default
spec:
  type: ClusterIP
  ports:
  - port: 8888
    targetPort: 8888
  selector:
    app: jupyter-lab
If I understand correctly, you are trying to expose jupyter notebook through the nginx ingress proxy and make it accessible on port 80.
Run the following command to check which nodePort is used by the nginx ingress service:
$ kubectl get svc -n ingress-nginx ingress-nginx-controller
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.96.240.73   <none>        80:30816/TCP,443:31475/TCP   3h30m
In my case that is port 30816 (for http) and 31475 (for https).
With the NodePort type you can only use ports in the range 30000-32767 (k8s docs: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport). You can change that range using the kube-apiserver flag --service-node-port-range, setting it to e.g. 80-32767, and then set nodePort: 80 in your ingress-nginx-controller service:
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.44.0
    helm.sh/chart: ingress-nginx-3.23.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 80 # <- HERE
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 443 # <- HERE
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
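For reference, on a kubeadm-based cluster the flag change is typically made in the kube-apiserver static pod manifest on the control-plane node; a sketch, assuming kubeadm's default file layout (the kubelet restarts the apiserver automatically once the file is saved):
# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=80-32767   # <- added flag
    # ... keep all the existing flags below as they are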
It is generally not advised to change service-node-port-range, though, since you may encounter issues if you use ports that are already open on nodes (e.g. port 10250, which is opened by the kubelet on every node).
What might be a better solution is to use MetalLB.
EDIT:
How can I get ingress to my jupyter-lab at http://jupyter-lab.local ???
Assuming you don't need a failure-tolerant solution, download the https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml file and change the ports: section of the Deployment object as follows:
ports:
- name: http
  containerPort: 80
  hostPort: 80 # <- add this line
  protocol: TCP
- name: https
  containerPort: 443
  hostPort: 443 # <- add this line
  protocol: TCP
- name: webhook
  containerPort: 8443
  protocol: TCP
and apply the changes:
kubectl apply -f deploy.yaml
Now run:
$ kubectl get po -n ingress-nginx ingress-nginx-controller-<HERE PLACE YOUR HASH> -owide
NAME                                        READY   STATUS    RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
ingress-nginx-controller-67897c9494-c7dwj   1/1     Running   0          97s   172.17.0.6   <node_name>   <none>           <none>
Notice the <node_name> in the NODE column. This is the name of the node where the pod got scheduled. Now take this node's IP and add it to your /etc/hosts file.
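For example (a sketch; 192.168.1.32 stands in for whatever the node's actual IP is):
echo "192.168.1.32 jupyter-lab.local" | sudo tee -a /etc/hosts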
It should work now (go to http://jupyter-lab.local to check it), but this solution is fragile: if the nginx ingress controller pod gets rescheduled to another node, it will stop working (and it will stay like this until you change the IP in the /etc/hosts file). It's also generally not advised to use the hostPort: field unless you have a very good reason to do so, so don't abuse it.
If you need a failure-tolerant solution, use MetalLB and create a Service of type LoadBalancer for the nginx ingress controller.
I haven't tested it, but the following should do the job, assuming that you have correctly configured MetalLB:
kubectl delete svc -n ingress-nginx ingress-nginx-controller
kubectl expose deployment -n ingress-nginx ingress-nginx-controller --type LoadBalancer
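In case it helps, a minimal layer 2 configuration for that generation of MetalLB (v0.9-v0.12 is configured through a ConfigMap; the address range below is illustrative, pick a free range on your LAN):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # <- unused IPs on your network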

Kubernetes: Route Kubernetes dashboard through Ingress without host and without proxy

Cluster information:
Installation Method: kubeadm
Kubernetes: 1.19.2
Master & Nodes: Ubuntu 20.04.1 (Oracle VirtualBox)
Docker: 19.03.12
Calico: 3.16.1
Ingress: Bare-metal - 0.40.1
I want to access the Kubernetes dashboard from my laptop using ingress, without a proxy.
Can anyone help me with the steps? (I tried multiple ways with the help of the internet... not sure where I am going wrong.)
Note: As per discussion forums I have added "hostNetwork: true" under the deployment section in the ingress YAML to resolve "not working without host parameter", and commented out "type: NodePort".
Updated info:
I have created the ingress controller as a DaemonSet instead of a Deployment/pod - this helps in accessing it directly via the worker IPs. (This is what I am expecting - but I am unable to access the Kubernetes dashboard, as it is in a different namespace.)
Ingress yaml (this is running in the default namespace):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kdash-in-ns
            port:
              number: 443
kdash-in-ns yaml - svc with ExternalName:
kind: Service
apiVersion: v1
metadata:
  name: kdash-in-ns
  namespace: default
spec:
  type: ExternalName
  externalName: kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
  ports:
  - name: https
    port: 443
Below are details about the kdash-in-ns svc with ExternalName:
dockeras#ubuntu3:~/simplek8s/kubernetes/yamls/ingress-demo$ kubectl describe svc kdash-in-ns
Name:              kdash-in-ns
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          <none>
Type:              ExternalName
IP:
External Name:     kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
Port:              https  443/TCP
TargetPort:        443/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
kubectl describe for the updated ingress routes: the nginx one works fine (I guess because both the ingress and nginx are in the same namespace); I'm getting errors for the dashboard, as it is in a different namespace (kubernetes-dashboard):
dockeras#ubuntu3:~$ kubectl describe ing nginx-ingress
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Name:             nginx-ingress
Namespace:        default
Address:          192.168.1.31,192.168.1.32
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /nginx      nginx-deploy-main:80
        /foo        kubernetes-dashboard:443 (<error: endpoints "kubernetes-dashboard" not found>)
        /dashboard  kdash-in-ns:443 (<error: endpoints "kdash-in-ns" not found>)
Annotations:  kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/rewrite-target: /$2
Events:
  Type    Reason  Age    From                      Message
  ----    ------  ----   ----                      -------
  Normal  CREATE  4m40s  nginx-ingress-controller  Ingress default/nginx-ingress
When I tried the same URLs in a browser, these were the responses (one of my worker IPs is 192.168.1.31):
192.168.1.31/nginx - responds with the nginx default page (pod: nginx-deploy-main)
192.168.1.31/foo - error page - 503 Service Temporarily Unavailable (default nginx)
192.168.1.31/dashboard - 504 Gateway Time-out (default nginx)
Running svc and pods: (screenshot of all pods and services omitted)
If I understand correctly, you want to access a Kubernetes service (the dashboard) from outside the cluster. You can deploy the MetalLB load balancer and have it manage a pool of IPs from the external cluster network, assigned to your cluster.
That way you can assign an IP and a LoadBalancer through which you will access your service. Below is an example for an mssql server, but you can easily adapt it to the dashboard:
apiVersion: v1
kind: Service
metadata:
  name: sql-server-lb
  namespace: database-server
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  selector:
    app: sql-server
  ports:
  - port: 1433
    targetPort: 1433
  type: LoadBalancer
https://metallb.universe.tf/
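Adapted to the dashboard, the service might look like the following (a sketch, assuming the standard kubernetes-dashboard deployment, which labels its pods k8s-app: kubernetes-dashboard and serves HTTPS on container port 8443):
apiVersion: v1
kind: Service
metadata:
  name: dashboard-lb                  # hypothetical name
  namespace: kubernetes-dashboard
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443
    targetPort: 8443
  type: LoadBalancer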

Ingress on GKE: specify a healthcheck per service

I'm trying to use an Ingress to load-balance 2 services on Google Kubernetes Engine.
Here is the ingress config for this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: web
          servicePort: 8080
      - path: /v2/keys
        backend:
          serviceName: etcd-np
          servicePort: 2379
where web is an example service from the Google samples:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: web
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
But the second service is an etcd cluster with a NodePort service:
---
apiVersion: v1
kind: Service
metadata:
  name: etcd-np
spec:
  ports:
  - port: 2379
    targetPort: 2379
  selector:
    app: etcd
  type: NodePort
But only the first ingress rule works properly. I see in the logs:
ingress.kubernetes.io/backends: {"k8s-be-30195--ebfd7339a961462d":"UNHEALTHY","k8s-be-30553--ebfd7339a961462d":"HEALTHY","k8s-be-31529--ebfd7339a961462d":"HEALTHY"}
etcd-np itself works properly; it is not a problem with etcd. I think the problem is that the etcd server answers GET / requests with a 404, and some health check on the ingress level does not allow it to be used.
That's why I have 2 questions:
1) How can I provide health check URLs for each backend path of the ingress?
2) How can I debug such issues? What I see now is:
kubectl describe ingress basic-ingress
Name:             basic-ingress
Namespace:        default
Address:          4.4.4.4
Default backend:  default-http-backend:80 (10.52.6.2:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /*        web:8080 (10.52.8.10:8080)
        /v2/keys  etcd-np:2379 (10.52.0.2:2379,10.52.2.4:2379,10.52.8.4:2379)
Annotations:  ingress.kubernetes.io/backends:
                {"k8s-be-30195--ebfd7339a961462d":"UNHEALTHY","k8s-be-30553--ebfd7339a961462d":"HEALTHY","k8s-be-31529--ebfd7339a961462d":"HEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s-fw-default-basic-ingress--ebfd7339a961462d
              ingress.kubernetes.io/target-proxy: k8s-tp-default-basic-ingress--ebfd7339a961462d
              ingress.kubernetes.io/url-map: k8s-um-default-basic-ingress--ebfd7339a961462d
Events: <none>
But it does not provide me any info about this incident.
Update:
kubectl describe svc etcd-np
Name:                     etcd-np
Namespace:                default
Labels:                   <none>
Annotations:
Selector:                 app=etcd
Type:                     NodePort
IP:                       10.4.7.20
Port:                     <unset>  2379/TCP
TargetPort:               2379/TCP
NodePort:                 <unset>  30195/TCP
Endpoints:                10.52.0.2:2379,10.52.2.4:2379,10.52.8.4:2379
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
According to the docs:
A Service exposed through an Ingress must respond to health checks from the load balancer. Any container that is the final destination of load-balanced traffic must do one of the following to indicate that it is healthy:
Serve a response with an HTTP 200 status to GET requests on the / path.
Configure an HTTP readiness probe. Serve a response with an HTTP 200 status to GET requests on the path specified by the readiness probe. The Service exposed through an Ingress must point to the same container port on which the readiness probe is enabled.
For example, suppose a container specifies this readiness probe:
...
readinessProbe:
  httpGet:
    path: /healthy
Then if the handler for the container's /healthy path returns an HTTP 200 status, the load balancer considers the container to be alive and healthy.
Now, since etcd has a health endpoint at /health, the readiness probe will look like:
...
readinessProbe:
  httpGet:
    path: /health
This becomes a bit tricky if mTLS is enabled in etcd. To avoid that, check the docs.
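Put together, the probe on the etcd container might look like the following (a sketch only; the image tag is illustrative, and the scheme depends on whether etcd serves its client port over TLS):
containers:
- name: etcd
  image: quay.io/coreos/etcd:v3.4.13   # illustrative image tag
  ports:
  - containerPort: 2379
  readinessProbe:
    httpGet:
      path: /health       # etcd's built-in health endpoint
      port: 2379
      scheme: HTTP        # use HTTPS if the client port is TLS-protected
    initialDelaySeconds: 5
    periodSeconds: 10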

Ingress-Nginx-Controller failed to find the 2nd service deployed on Google Cloud Platform

I deployed the following 2 services (built in Java) on GCP:
mply6 (service 1, listening on port 8080, exposed outside GCP), corresponding to the URL http://example.com/path1
gami6 (service 2, listening on port 8081, exposed outside GCP), corresponding to the URL http://example.com/path2
The yaml to deploy and expose service 1:
kind: Service
apiVersion: v1
metadata:
  name: mply6
spec:
  selector:
    app: mply6
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "35.223.241.9"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mply6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mply6
  template:
    metadata:
      labels:
        app: mply6
    spec:
      containers:
      - name: mply6
        image: gcr.io/mply6-271000/mply6:latest
        ports:
        - containerPort: 8080
The yaml to deploy and expose service 2:
kind: Service
apiVersion: v1
metadata:
  name: gami6
spec:
  selector:
    app: gmai6
  ports:
  - protocol: "TCP"
    port: 81
    targetPort: 8081
  type: LoadBalancer
  loadBalancerIP: "35.223.241.9"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gami6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gami6
  template:
    metadata:
      labels:
        app: gami6
    spec:
      containers:
      - name: gami6
        image: gcr.io/mply6-271000/gami6:latest
        ports:
        - containerPort: 8081
And the yaml to create the Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "35.223.241.9"
spec:
  rules:
  - http:
      paths:
      - path: /path1
        backend:
          serviceName: mply6
          servicePort: 80
      - path: /path2
        backend:
          serviceName: gami6
          servicePort: 81
Furthermore, the result of 'kubectl describe ingress basic-ingress':
Name:             basic-ingress
Namespace:        default
Address:          35.244.199.199
Default backend:  default-http-backend:80 (10.60.1.4:8080)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /multiplications/random  mply6:80 (10.60.0.32:8080)
        /results                 mply6:80 (10.60.0.32:8080)
        /leaders                 gami6:81 (10.60.0.32:8081)
        /stats                   gami6:81 (10.60.0.32:8081)
Annotations:
  kubernetes.io/ingress.global-static-ip-name: 35.223.241.9
Events:
  Type     Reason     Age                   From                      Message
  ----     ------     ---                   ----                      -------
  Normal   CREATE     35m                   nginx-ingress-controller  Ingress default/basic-ingress
  Warning  Translate  21m                   loadbalancer-controller   error while evaluating the ingress spec: could not find service "default/gami6"; could not find service "default/gami6"
  Warning  Translate  6m17s (x34 over 77m)  loadbalancer-controller   error while evaluating the ingress spec: could not find port "8081" in service "default/gami6"; could not find port "8081" in service "default/gami6"
  Normal   CREATE     44s (x1153 over 22h)  loadbalancer-controller   ip: 35.244.199.199
  Normal   UPDATE     7s (x13 over 35m)     nginx-ingress-controller  Ingress default/basic-ingress
Basically I'm expecting that when I request the URL http://example.com/path2, the Ingress-Nginx-Controller will find the 2nd service gami6; so why is there the error message could not find service "default/gami6"? (http://example.com/path1 can be reached without a problem.)
First, I’ve noticed a typo in the second service yaml:
spec:
  selector:
    app: gmai6 # <-- should this be gami6?
To use Google’s Ingress with more than one backend you may need to use “NodePort” instead of “LoadBalancer” to expose the services. You can read about this in this documentation, and see the sketch below:
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#multiple_backend_services
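For instance, the gami6 Service with the selector typo fixed and exposed as a NodePort might look like this (a sketch of the suggested changes, not a tested manifest):
kind: Service
apiVersion: v1
metadata:
  name: gami6
spec:
  selector:
    app: gami6      # typo fixed: was gmai6
  ports:
  - protocol: "TCP"
    port: 81
    targetPort: 8081
  type: NodePort    # NodePort instead of LoadBalancer, as GKE Ingress expects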
In case you want to use the nginx ingress controller instead, you should follow this guide: https://cloud.google.com/community/tutorials/nginx-ingress-gke
I cannot add any remarks on the deployment you have above; however, this deployment is very similar to the http-balancer documentation.
I tried the deployment mentioned in the documentation and I was able to achieve what you are trying to do: having a load balancer route requests based on the path.

Kubernetes Ingress not resolving backend service

I'm trying to create an ingress within minikube. I have already enabled the ingress addon and checked that all the associated services and pods have been added and are running.
When I create the ingress, I point it at a NodePort service in the same namespace as the ingress. But when I describe the ingress, the backend IP address is <none>.
This is my deployment yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: proxy
  labels:
    name: proxy
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: deployment
  namespace: proxy
  labels:
    app: proxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: proxy
    spec:
      containers:
      - name: proxy
        image: wildapplications/proxy:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regsecret
---
apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: proxy
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: proxy
  externalName: proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  namespace: proxy
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /test
        backend:
          serviceName: service
          servicePort: 8080
When I describe the ingress I get:
Name:             ingress
Namespace:        proxy
Address:          192.168.99.100
Default backend:  default-http-backend:80 (172.17.0.14:8080)
Rules:
  Host              Path   Backends
  ----              ----   --------
  echo.example.com
                    /test  service:8080 (<none>)
Annotations:
  rewrite-target:  /
Events:
  Type    Reason  Age  From                Message
  ----    ------  ---  ----                -------
  Normal  CREATE  16m  ingress-controller  Ingress proxy/ingress
  Normal  CREATE  15m  ingress-controller  Ingress proxy/ingress
  Normal  UPDATE  15m  ingress-controller  Ingress proxy/ingress
Is there anything glaringly obvious as to why the ingress isn't resolving the backend specified in the service created directly above it?
I found the solution to my question, so I'll post it in case someone else comes across something similar.
I was trying to access the ingress through my minikube IP address (minikube ip to get the IP); this was returning a 404 because I was not navigating to it via the host name.
To solve the 404 I executed
echo "$(minikube ip) echo.example.com" | sudo tee -a /etc/hosts
and then navigated to the host URL in my browser.
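Alternatively, you can verify the routing without editing /etc/hosts by sending the Host header explicitly; a quick check (not part of the original answer):
curl -H "Host: echo.example.com" http://$(minikube ip)/test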