Minikube NGINX Ingress returns 404 Not Found - kubernetes

I created a Deployment, a Service, and an Ingress to be able to access an NGINX web server from my host, but I keep getting 404 Not Found. After many hours of troubleshooting, I'm at a point where some help would be very welcome.
The steps and related yaml files are below.
Enable Minikube NGINX Ingress controller
minikube addons enable ingress
Create NGINX web server deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-white
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-webserver-white
  template:
    metadata:
      labels:
        app: nginx-webserver-white
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
Create ClusterIP Service to manage the access to the pods
apiVersion: v1
kind: Service
metadata:
  name: webserver-white-svc
  labels:
    run: webserver-white-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-webserver-white
Create Ingress to access service from outside the Cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver-white-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  defaultBackend:
    service:
      name: webserver-white-svc
      port:
        number: 80
  rules:
  - host: white.example.com # This is pointing to the control plane IP
    http:
      paths:
      - backend:
          service:
            name: webserver-white-svc
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
Tests
When connecting to one pod and executing curl http://localhost, it returns the NGINX homepage HTML, so the pod looks good.
When creating a test pod and executing curl http://<service-cluster-ip>, it returns the NGINX homepage HTML, so the service looks good.
When connecting to the ingress-nginx controller pod and executing curl http://<service-cluster-ip>, it also returns the NGINX homepage HTML, so the connection between the ingress controller and the service looks good.
When connecting to the control plane with minikube ssh and executing ping <nginx-controller-ip>, I see that it reaches the nginx controller.
I tested the same setup with a NodePort Service instead of ClusterIP and noticed that I could access the NGINX homepage through the node port, but not through the Ingress.
Any idea what I could be doing wrong and/or what else I could do to better troubleshoot this issue?
Other notes
minikube version: v1.23.0
kubectl version on the client and server: v1.22.1
OS: Ubuntu 18.04.5 LTS (Bionic Beaver)
UPDATE/SOLUTION:
The solution was to add the missing annotation kubernetes.io/ingress.class: "nginx" on the Ingress.
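For reference, the fixed Ingress metadata would look like this (a sketch combining the annotation from the solution with the manifest above; on newer clusters, setting spec.ingressClassName: nginx is the equivalent mechanism):

```yaml
metadata:
  name: webserver-white-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
```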

Related

How can I resolve connection issues between my frontend and backend pods in Minikube?

I am trying to deploy an application to Minikube. However I am having issues connecting the frontend pod to the backend pod.
Each Deployment has a ClusterIP service and a NodePort service.
I access the frontend via browser, executing the command: minikube service frontend-entrypoint. When the frontend tries to query the backend it requests the URL: http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial, but the status response is: (failed)net::ERR_EMPTY_RESPONSE.
If I access the frontend via cmd, executing the command: kubectl exec -it react-deployment-xxxxxxxxxx-xxxxx -- sh, and execute inside it the command: curl -X GET "http://fastapi-cluster-ip-service:8000/api/v1/baseline/building_type?building_type=commercial" I get what I expect.
So, I understand that NodePorts are used to route external traffic to services inside the cluster by opening a specific port on each node and forwarding traffic from that port to the service, while ClusterIPs expose services only within the cluster and are not directly accessible from outside it. What I don't understand is why, when reaching the frontend via the browser, it is not able to connect internally to the backend. Once interacting with the frontend, I assumed I was inside the cluster...
I tried to expose the cluster using other service types such as Ingress or LoadBalancer, but I didn't have success connecting to the frontend, so I rolled back to the NodePort solution.
References:
Kubernetes Guide - Deploying a machine learning app built with Django, React and PostgreSQL using Kubernetes
Exposing External-Facing Services In Kubernetes
Files:
component_backend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend-entrypoint
spec:
  selector:
    component: fastapi
  ports:
  - name: http2
    port: 8000
    targetPort: 8000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: fastapi
  template:
    metadata:
      labels:
        component: fastapi
    spec:
      containers:
      - name: fastapi-container
        image: xxx/yyy:zzz
        ports:
        - containerPort: 8000
        env:
        - name: DB_USERNAME
          valueFrom:
            configMapKeyRef:
              name: app-variables
              key: DB_USERNAME
        [...]
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
  - port: 8000
    targetPort: 8000
  externalIPs:
  - <minikube ip>
componente_frontend.yaml:
apiVersion: v1
kind: Service
metadata:
  name: frontend-entrypoint
spec:
  selector:
    component: react
  ports:
  - name: http1
    port: 3000
    targetPort: 3000
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react
  template:
    metadata:
      labels:
        app: react
    spec:
      containers:
      - name: react-container
        image: xxx/yyy:zzz
        ports:
        - containerPort: 3000
        env:
        - name: BASELINE_API_URL
          valueFrom:
            configMapKeyRef:
              name: app-variables
              key: BASELINE_API_URL
      imagePullSecrets:
      - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  name: react-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: react
  ports:
  - port: 3000
    targetPort: 3000
  externalIPs:
  - <minikube ip>
BASELINE_API_URL is declared with the backend ClusterIP service name (i.e., fastapi-cluster-ip-service).
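The app-variables ConfigMap referenced by both Deployments is not shown in the question; a minimal sketch of what it would contain (the key names come from the manifests above; the values are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-variables
data:
  DB_USERNAME: myuser  # illustrative value
  BASELINE_API_URL: http://fastapi-cluster-ip-service:8000  # internal service URL, as described above
```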
ingress_service.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: sfs.baseline
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-entrypoint
            port:
              name: http1
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-entrypoint
            port:
              name: http2
You have to think about this: if you call the API through your browser or Postman from your host, they don't know anything about the URLs inside your pod.
For production, an Ingress configuration is needed so you can call the API like:
https://api.mydomain.com/api/myroute
When you deploy the frontend, the paths inside your code should be created dynamically:
Define the paths with environment variables and use them inside your code.
On Kubernetes, define a ConfigMap with the paths and bind it to your container.
That way, when you call the frontend, it will have the right paths.
Locally, your frontend can be reached with the IP address of your master node and the NodePort.
To avoid using the IP address, you can create entries in your local hosts file:
nodesipaddress mydomain.local
nodesipaddress api.mydomain.local
So from your browser you can reach the frontend with mydomain.local:nodeportOfFrontend, and your frontend code should call the backend with api.mydomain.local:nodeportOfApi.
If you enable Ingress inside your cluster, create an Ingress resource in your deployment, and use a Service of type LoadBalancer, then you can call the frontend and API without the NodePort.
If you run into issues with that, please post all your Kubernetes YAMLs: Deployments, Services, ConfigMap, and Ingress if you decide to use it.
UPDATE
Check whether you have Ingress enabled on Minikube, then modify your Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: mydomain.local
    http:
      paths:
      - path: /
I'm not sure if Minikube supports ClusterIP.
Change the type to LoadBalancer in your service:
apiVersion: v1
kind: Service
metadata:
  name: fastapi-cluster-ip-service
spec:
  type: LoadBalancer
  selector:
    component: fastapi
  ports:
  - port: 8000
    targetPort: 8000
Then verify your service:
kubectl get svc -o wide
kubectl describe services my-service
Check whether the external IP is pending; if so, add externalIPs:
ports:
- port: 8000
  targetPort: 8000
externalIPs:
- xxx.xxx.xxx.xxx # your minikube ip
However, I would first try creating a Service of type NodePort and then accessing it with xxx.xxx.xxx.xxx:NodePort.
In the YAMLs you posted I see only the backend.
Keep in mind that whether you use Ingress or NodePort, your code must be adapted:
Your frontend must call the API by the API's ingress domain, or xxx.xxx.xxx.xxx:NodePort / mydomain.local/api.
I think you are misunderstanding how the frontend part of a website works.
To display the website in your browser, you download all the required content from the server/pod where the frontend is hosted.
Once the frontend is downloaded and displayed in your browser, you hit the backend directly from your browser/computer using either an IP address or a URL.
When using a URL, your browser tries to resolve the hostname using a DNS server.
So when you view the website in your browser, you are not inside the server/pod, and you cannot resolve the URL because that URL is not mapped to any IP address (or at least not to your server).
That is also why it works when you go inside the pod using kubectl exec: there you are inside the cluster network and using its internal DNS.
As David said, to make this work you need to call the backend from the frontend through some kind of ingress.
You will also need to create a DNS entry for your domain (if using a URL).
Otherwise, you can directly use the IP of the pod/service.
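Concretely, with the Ingress from the question (host sfs.baseline routing /api to the backend), the ConfigMap entry the frontend bakes in could point at the browser-resolvable ingress host instead of the internal service name. A hedged sketch, assuming a hosts-file entry maps sfs.baseline to the Minikube IP:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-variables
data:
  # Browser-resolvable URL via the ingress, instead of the
  # cluster-internal name http://fastapi-cluster-ip-service:8000
  BASELINE_API_URL: http://sfs.baseline/api
```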

Localhost kubernetes ingress not exposing services to local machine

I'm running Kubernetes on localhost; the pod is running, and I can access the services when port-forwarding:
kubectl port-forward svc/my-service 8080:8080
I can GET/POST etc. against the services on localhost.
I'm trying to access them through an Ingress instead; here is the YAML file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 8080
I've also installed the ingress controller. But it isn't working as expected. Anything wrong with this?
EDIT: the service that I'm trying to connect to the Ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - image: test/my-service:0.0.1-SNAPSHOT
        name: my-service
        ports:
        - containerPort: 8080
        # ... other spring boot override properties
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  type: ClusterIP
  selector:
    app: my-service
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
The service works by itself, though.
EDIT:
It worked when I used https instead of http
Is ingress resource in the same namespace as the service? Can you share the manifest of service? Also, what do logs of nginx ingress-controller show and what sort of error do you face when hitting the endpoint in the browser?
Ingress's YAML file looks OK to me BTW.
I was being stupid. It worked when I used https instead of http

Kubernetes, Loadbalancing, and Nginx Ingress - AKS

Stack:
Azure Kubernetes Service
NGINX Ingress Controller - https://github.com/kubernetes/ingress-nginx
AKS Loadbalancer
Docker containers
My goal is to create a K8s cluster that will allow me to use multiple pods, under a single IP, to create a microservice architecture. After working through tons of tutorials and documentation, I'm not having any luck with my end goal. I got to the point of being able to access a single deployment using the LoadBalancer, but introducing the Ingress has not been successful so far. The services are separated into their respective files for readability and ease of control.
Additionally, the Ingress Controller was added to my cluster as described in the installation instructions using: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
LoadBalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: x.x.x.x
  selector:
    app: ingress-service
    tier: backend
  ports:
  - name: "default"
    port: 80
    targetPort: 80
  type: LoadBalancer
IngressService.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 80
api-deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        tier: backend
        track: stable
    spec:
      containers:
      - name: api
        image: image:tag
        ports:
        - containerPort: 80
        imagePullPolicy: Always
      imagePullSecrets:
      - name: SECRET
The API in the image is exposed on port 80 correctly.
After applying each of the above yml services and deployments, I attempt a web request to one of the api resources via the LoadBalancer's IP and receive only a timeout on my requests.
Found my answer after hunting around enough. Basically, the problem was that the Ingress Controller manifest has a LoadBalancer Service built into its YAML, as mentioned in the comments above. However, the selector for that LoadBalancer requires marking your Ingress as part of the nginx class. That Ingress then points to each of the services attached to your pods. I also had to make a small modification to allow using a static IP in the provided load balancer.
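The "small modification" for a static IP typically means setting loadBalancerIP on the controller's LoadBalancer Service from the deploy.yaml. A sketch (the IP must be pre-provisioned in the right Azure resource group; exact requirements vary by AKS and controller version):

```yaml
# Fragment of the ingress-nginx controller Service (type: LoadBalancer)
spec:
  type: LoadBalancer
  loadBalancerIP: x.x.x.x  # pre-provisioned static IP (assumption)
```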

502 bad gateway using Kubernetes with Ingress controller

I have a Kubernetes setup on Minikube with this configuration:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappdeployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: custom-docker-image:latest
        ports:
        - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: example-service
spec:
  selector:
    app: myapp
  ports:
  - protocol: "TCP"
    # Port accessible inside cluster
    port: 3000
    # Port to forward to inside the pod
    targetPort: 3000
  type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 3000
I took a look at the solution to this Stack Overflow post, and it seems my configuration is... fine?
What am I doing wrong to get 502 Bad Gateway when accessing http://192.xxx.xx.x, my Minikube address? The nginx-ingress-controller logs say:
...
Connection refused while connecting to upstream
...
Another odd piece of info: when I follow this guide to setting up the basic node service on Kubernetes, everything works and I see a "Hello world" page when I open the Minikube address.
Steps taken:
I ran
kubectl port-forward pods/myappdeployment-5dff57cfb4-6c6bd 3000:3000
Then I visited localhost:3000 and saw
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
I figured the reason is more obvious in the logs so I ran
kubectl logs pods/myappdeployment-5dff57cfb4-6c6bd
and got
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
Waiting for MySQL...
no destination
...
Thus I reasoned that I was originally getting a 502 because none of the pods were actually serving, due to the missing MySQL setup.
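A readinessProbe on the Deployment would have surfaced this earlier by keeping the not-yet-ready pods out of the Service endpoints, so the ingress controller would not route to them at all. A sketch; the /healthz path and timings are assumptions about the app:

```yaml
containers:
- name: myapp
  image: custom-docker-image:latest
  ports:
  - containerPort: 3000
  readinessProbe:
    httpGet:
      path: /healthz  # assumed health endpoint
      port: 3000
    initialDelaySeconds: 5
    periodSeconds: 10
```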

Nginx Ingress Failing to Serve

I am new to k8s.
I have a deployment file, shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      containers:
      - name: jenkins
        image: jenkins
        ports:
        - containerPort: 8080
        - containerPort: 50000
My Service File is as following:
apiVersion: v1
kind: Service
metadata:
  name: jenkins-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    component: web
My Ingress File is
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: jenkins.xyz.com
    http:
      paths:
      - path: /
        backend:
          serviceName: jenkins-svc
          servicePort: 80
I am using the NGINX ingress project, and my cluster was created using kubeadm with 3 nodes:
nginx ingress
I first ran the mandatory command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
When I tried hitting jenkins.xyz.com, it didn't work.
When I ran the command
kubectl get ing
the Ingress resource didn't get an IP address assigned to it.
The Ingress resource is nothing but the configuration of a reverse proxy (the ingress controller).
It is normal that the Ingress doesn't get an IP address assigned.
What you need to do is connect to your ingress controller instance(s).
In order to do so, you need to understand how they're exposed in your cluster.
Considering the YAML you used to get the ingress controller running, there is no sign of exposure to the outside network.
You need at least to define a Service to expose your controller (it might be a LoadBalancer if the provider hosting your cluster supports it); you can also use hostNetwork: true or a NodePort.
To use the latter option (NodePort), you could apply this YAML:
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/service-nodeport.yaml
I suggest you read the Ingress documentation page to get a clearer idea about how all this stuff works.
https://kubernetes.io/docs/concepts/services-networking/ingress/
In order to access your local Kubernetes cluster pods, a NodePort needs to be created. The NodePort publishes your service on every node using its public IP and a port. You can then access the service using any of the cluster node IPs and the assigned port.
Defining a NodePort in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
  - port: 8082       # Cluster IP, i.e. http://10.103.75.9:8082
    targetPort: 8080 # Application port
    nodePort: 30000  # (EXTERNAL-IP VirtualBox IPs) i.e. http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
    protocol: TCP
    name: http
  selector:
    app: nginx
See a full example with source code at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).
The NGINX ingress controller can also be replaced with Istio if you want to benefit from a service mesh architecture for:
Load balancing traffic, external or internal
Controlling failures, retries, and routing
Applying limits and monitoring network traffic between services
Securing communication
See Installing Istio in Kubernetes under VirtualBox (without Minikube).