Ingress responding with 'default backend - 404' when using GKE - kubernetes

Using the latest Kubernetes version in GCP (1.6.4), I have the following Ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myproject
  namespace: default
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: staging.myproject.io
    http:
      paths:
      - path: /poller
        backend:
          serviceName: poller
          servicePort: 8080
Here is my service and deployment:
apiVersion: v1
kind: Service
metadata:
  name: poller
  labels:
    app: poller
    tier: backend
    role: service
spec:
  type: NodePort
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller
        tier: backend
        role: service
    spec:
      containers:
      - name: poller
        image: gcr.io/myproject-1364/poller:latest
        imagePullPolicy: Always
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: staging
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
In my /etc/hosts I have a line like:
35.190.37.148 staging.myproject.io
However, I get default backend - 404 when curling any endpoint on staging.myproject.io:
$ curl staging.myproject.io/poller/cache/status
default backend - 404
I have the exact same configuration working locally inside Minikube, with the only difference being the domain (dev.myproject.io), and that works like a charm.
I have read and tried pretty much everything I could find, including several similar questions and answers, but maybe I'm just missing something... any ideas?

It does take 5-10 minutes for an Ingress to actually become usable in GKE. In the meantime, you may see responses with status codes 404, 502, and 500.
There is an ingress tutorial here: https://cloud.google.com/container-engine/docs/tutorials/http-balancer and I recommend following it. Based on what you pasted, I can say the following:
You use service.Type=NodePort, which is correct.
I am not sure about the ingress.kubernetes.io/rewrite-target annotation; it is an nginx-ingress annotation that the GCE controller does not support, so that may be part of the issue.
Make sure your application responds with 200 OK to a GET / request (see the readinessProbe sketch below).
Also, I realize you curl http://<ip>/, but your Ingress spec only handles the /poller path, so it is normal to get the default backend - 404 response when querying /: you didn't configure any backend for the / path in your Ingress spec.
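Expanding on the health-check point: the GCE controller derives its health check from the container's HTTP readinessProbe when one is defined, so an app that only serves under /poller can redirect the check there. A minimal sketch (the /poller/cache/status path is assumed from the question's curl, not confirmed as a health endpoint):
# Added to the poller container spec in the Deployment; the GCE ingress
# controller uses this probe's path for the load balancer health check.
readinessProbe:
  httpGet:
    path: /poller/cache/status   # assumed health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5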

If anyone else is facing this problem, check that the Host header is correct and matches the expected domain.
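For example, when hitting the load balancer IP directly instead of editing /etc/hosts, the expected Host header can be supplied explicitly (IP and domain taken from the question above):
curl -H "Host: staging.myproject.io" http://35.190.37.148/poller/cache/status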

Related

Kubernetes Internal Service Axios NuxtJS

I'm trying to learn Kubernetes as I go, and I'm currently trying to deploy a test application that I made.
I have 3 containers, and each container runs in its own pod:
Front end App (Uses Nuxtjs)
Backend API (Nodejs)
MongoDB
For the front-end container I have configured an external Service (LoadBalancer), which is working fine. I can access the app from my browser with no issues.
For the backend API and MongoDB I configured an internal Service for each. The communication between the backend API and MongoDB is working. The problem I'm having is getting the frontend to talk to the backend API.
I'm using the Axios module in Nuxt.js, and in the nuxt.config.js file I have set the Axios base URL to http://service-name:portnumber/. But that does not work; I'm guessing it's because the URL is being called from the client (browser) side and not from the server. If I change the Service type of the backend API to LoadBalancer, configure an IP address and port number, and use that as my Axios URL, then it works. However, I was hoping to keep the backend API service internal. Is it possible to call the Axios base URL from the server side and not from the client side?
Any help/guidance will be greatly appreciated.
Here is my Front-End YML File
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mhov-ipp
  name: mhov-ipp
  namespace: mhov
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mhov-ipp
  template:
    metadata:
      labels:
        app: mhov-ipp
    spec:
      containers:
      - image: mhov-ipp:1.1
        name: mhov-ipp
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: "development"
        - name: PORT
          value: "8080"
        - name: TITLE
          value: "MHOV - IPP - K8s"
        - name: API_URL
          value: "http://mhov-api-service:4000/"
---
apiVersion: v1
kind: Service
metadata:
  name: mhov-ipp-service
spec:
  selector:
    app: mhov-ipp
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8082
    targetPort: 8080
    nodePort: 30600
Here is the backend YML File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mhov-api-depl
  labels:
    app: mhov-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mhov-api
  template:
    metadata:
      labels:
        app: mhov-api
    spec:
      containers:
      - name: mhov-api
        image: mhov-api:1.0
        ports:
        - containerPort: 4000
        env:
        - name: mongoURI
          valueFrom:
            configMapKeyRef:
              name: mhov-configmap
              key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mhov-api-service
spec:
  selector:
    app: mhov-api
  ports:
  - protocol: TCP
    port: 4000
    targetPort: 4000
What is ingress and how to install it
Your guess is correct. The frontend runs in the browser, and the browser "doesn't know" where the backend is or how to reach it. You have two options here:
expose the backend outside your cluster, as you did
use a more advanced solution such as ingress
The second option will move you forward, but it requires changing some of your application's configuration, such as the URL, since the application will be exposed to "the internet" (not literally, but you can make it so using your cloud provider).
What is ingress:
Ingress is an API object that exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
The most common option is the NGINX Ingress Controller.
Installation depends on the cluster type; however, I suggest using helm. (If you're not familiar with helm, it's a template engine that uses charts to install and set up applications. There are quite a lot of ready-made charts, e.g. ingress-nginx.)
If you're using minikube, for example, it already has a built-in nginx ingress that can be enabled as an addon.
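A minimal helm-based install sketch (repo URL and chart name as published by the ingress-nginx project; verify against the chart's current docs):
# Add the ingress-nginx chart repository and install the controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
# On minikube, the built-in controller can be enabled instead
minikube addons enable ingress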
How to expose services using ingress
Once you have a working ingress, it's time to create rules for it.
You need an ingress that can route to both the frontend and the backend.
Example taken from official kubernetes documentation:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 4200
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 8080
In this example, two different services are available on different paths within the foo.bar.com hostname, and both services live inside the cluster. There is no need to expose them outside the cluster, since traffic is directed through the ingress.
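Adapted to the services in your question, a sketch might look like this (the ingress name, host, and ingressClassName are placeholders/assumptions; the service names and ports come from your manifests):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mhov-ingress          # placeholder name
  namespace: mhov
spec:
  ingressClassName: nginx     # assumes the nginx controller registered this class
  rules:
  - host: mhov.example.com    # placeholder hostname
    http:
      paths:
      - path: /api            # backend API
        pathType: Prefix
        backend:
          service:
            name: mhov-api-service
            port:
              number: 4000
      - path: /               # frontend
        pathType: Prefix
        backend:
          service:
            name: mhov-ipp-service
            port:
              number: 8082
With this in place, the frontend's Axios base URL can point at the public /api path, and both services can stay internal (e.g. ClusterIP) instead of LoadBalancer.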
Actual solution (how to approach)
A very similar configuration was fixed and started working as expected in another answer of mine, which is safe to share. :)
As you can see, the OP there faced the same issue: the frontend was accessible while the backend wasn't.
Feel free to use anything out of that answer/repository.
Useful links:
kubernetes ingress

GKE Ingress Bad Gateway or Backend not found

I'm creating this question even though various answers are available, because none of them solved my problem. I'm using GKE Ingress in my example.
I've been using GKE and set up GKE Ingress merely to manage the paths for everything we keep in GCR.
I've created a .NET Core based API that has various paths. If I expose the deployment with a Service of type LoadBalancer, it works perfectly. However, when I use GKE Ingress and set up the service and its backend, it throws an error like Bad Gateway or sometimes backend not found.
The .NET Core API application exposes the following paths when exposed via a LoadBalancer service:
http://35.202.38.40/api/books
http://35.202.38.40/api/categories
We have other POST APIs set up the same way.
Now I'm stuck: when I use GKE Ingress and set up my ingress.yaml as follows, it doesn't work and throws an error.
Ingress.Yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gke-ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 80
      - path: /kube
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
NOTE: my hello-world service exposes the .NET Core API application; don't be confused by the name "hello-world", it's only for testing.
Service Yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer
  selector:
    greeting: hello
    department: world
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Deployment Yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: world
  replicas: 2
  template:
    metadata:
      labels:
        greeting: hello
        department: world
    spec:
      containers:
      - name: hello
        image: gcr.io/gcpone-yuy-123114/oneplus2:631
        ports:
        - containerPort: 80
Kindly advise how to set up path-based routing in GKE Ingress for a .NET Core API app.
NOTE
From a bash shell inside the pod, I tried to reach my API on localhost and it responded correctly. What am I missing?
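No accepted answer is recorded here, but one detail worth checking (an assumption based on documented GCE ingress behavior, not verified against this setup): the GCE controller matches paths literally rather than by prefix, so /kube will not match /kube/anything, and /api/books is only reached through a wildcard. The documented wildcard form appends /*:
# Sketch of wildcard paths for the GCE controller; /kube/* matches
# subpaths of /kube, and the bare /* acts as the catch-all.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gke-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /kube/*
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 80
A failing load balancer health check (which defaults to GET / returning 200) is another common cause of Bad Gateway here.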

Kubernetes, Loadbalancing, and Nginx Ingress - AKS

Stack:
Azure Kubernetes Service
NGINX Ingress Controller - https://github.com/kubernetes/ingress-nginx
AKS Loadbalancer
Docker containers
My goal is to create a K8s cluster that will allow me to use multiple pods, under a single IP, to create a microservice architecture. After working through tons of tutorials and documentation, I'm not having any luck with my end goal. I got to the point of being able to access a single deployment using the LoadBalancer, but introducing the ingress has not been successful so far. The services are separated into their respective files for readability and ease of control.
Additionally, the Ingress Controller was added to my cluster as described in the installation instructions using: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
LoadBalancer.yml:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  loadBalancerIP: x.x.x.x
  selector:
    app: ingress-service
    tier: backend
  ports:
  - name: "default"
    port: 80
    targetPort: 80
  type: LoadBalancer
IngressService.yml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 80
api-deployment.yml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  selector:
    matchLabels:
      app: api
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: api
        tier: backend
        track: stable
    spec:
      containers:
      - name: api
        image: image:tag
        ports:
        - containerPort: 80
        imagePullPolicy: Always
      imagePullSecrets:
      - name: SECRET
The API in the image is exposed on port 80 correctly.
After applying each of the above yml services and deployments, I attempt a web request to one of the API resources via the LoadBalancer's IP and receive only a timeout.
Found my answer after hunting around enough. Basically, the problem was that the Ingress Controller's manifest already includes a LoadBalancer Service of its own, as mentioned in the comments above. However, the selector for that LoadBalancer requires marking your Ingress as part of the nginx class. That Ingress then points to each of the services attached to your pods. I also had to make a small modification to allow using a static IP with the provided load balancer.
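For reference, a sketch of that static-IP modification (the Service name and namespace assume the v0.35.0 cloud deploy manifest linked above; only the fields to change are shown, not the full manifest):
# The controller's own LoadBalancer Service, patched so the cloud
# provider assigns the pre-reserved static IP instead of a dynamic one.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: x.x.x.x   # your reserved static IP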

K3s traefik ingress returns gateway timeout

I am currently playing around with a Raspberry Pi based k3s cluster and I am observing a weird phenomenon.
I deployed two applications.
The first one is nginx, which I can reach at the URL http://external-ip/foo based on the following ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
  namespace: foo
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"
    traefik.ingress.kubernetes.io/rewrite-target: "/"
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service
          servicePort: 8081
The other one is grafana, which I cannot reach at the URL http://external-ip/grafana based on the ingress rule below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"
    traefik.ingress.kubernetes.io/rewrite-target: "/"
spec:
  rules:
  - http:
      paths:
      - path: /grafana
        backend:
          serviceName: grafana-service
          servicePort: 3000
When I do a port-forward directly to the pod, I can reach the grafana app; when I port-forward to the grafana service, it also works.
However, as soon as I try to reach it through the subpath, I get a gateway timeout.
Does anyone have a guess what I am missing?
Here are the deployment and service for the grafana deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: grafana
  labels:
    app: grafana
    tier: frontend
    service: monitoring
spec:
  selector:
    matchLabels:
      app: grafana
      tier: frontend
  template:
    metadata:
      labels:
        app: grafana
        tier: frontend
        service: monitoring
    spec:
      containers:
      - image: grafana
        imagePullPolicy: IfNotPresent
        name: grafana
        envFrom:
        - configMapRef:
            name: grafana-config
        ports:
        - name: frontend
          containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: grafana
  labels:
    app: grafana
    tier: frontend
    service: monitoring
spec:
  selector:
    app: grafana
    tier: frontend
  type: NodePort
  ports:
  - name: frontend
    port: 3000
    protocol: TCP
    targetPort: 3000
Solution
I had to add the following two parameters to my configmap to make it work:
GF_SERVER_ROOT_URL=http://localhost:3000/grafana/
GF_SERVER_FROM_SUB_PATH=true
As I mentioned in the comments, Grafana is not listening on / like the default nginx.
There is a related GitHub issue about this; if you want to make it work, you should specify root_url:
grafana.ini:
  server:
    root_url: https://subdomain.example.com/grafana
Specifically, take a look at the comments in that issue.
@tehemaroo added his own solution, which includes changing the root URL and sub_path in the ConfigMap:
I had to add the following two parameters to my configmap to make it work:
GF_SERVER_ROOT_URL=http://localhost:3000/grafana/
GF_SERVER_FROM_SUB_PATH=true
And the related documentation about that:
To serve Grafana behind a sub path:
Include the sub path at the end of the root_url.
Set serve_from_sub_path to true.
[server]
domain = example.com
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
serve_from_sub_path = true
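In cluster terms, those two settings can live in the grafana-config ConfigMap that the Deployment above already consumes via envFrom. A sketch (note that Grafana's env-variable form follows GF_<SECTION>_<KEY>, so serve_from_sub_path becomes GF_SERVER_SERVE_FROM_SUB_PATH):
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-config
  namespace: grafana
data:
  GF_SERVER_ROOT_URL: "http://localhost:3000/grafana/"
  GF_SERVER_SERVE_FROM_SUB_PATH: "true"   # values must be strings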

Kubernetes/GCE Ingress controller fails

I'm new to Kubernetes and I'm trying to do HTTP load balancing on Google Container Engine with TLS (using the included GCE Ingress Controller). The error I get is repeatable even when following Google's official tutorial. For readability, I summarize the procedure in config.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
Then:
kubectl create -f config.yaml
export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nginx)
gcloud compute firewall-rules create allow-130-211-0-0-22 --source-ranges 130.211.0.0/22 --allow tcp:$NODE_PORT
curl <ip_of_load_balancer>
(I removed the tags on the firewall rule so it applies to all instances.)
But I get a 502 Server Error, which according to the docs means it's likely bootstrapping (but it always stays like this). I can see in the console that the backend is unhealthy.
In the docs, to avoid this one needs:
a firewall rule (which is done above)
the Service must respond with 200 (I tested the nginx image both locally and through a plain Load Balancer, and it works fine)
So what is the cause of this error, and how can I further debug it?
I left the cluster overnight and it is now working. It seems it either takes quite some time to bootstrap, or something was fixed on the Google Cloud side.
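For anyone debugging the same 502 before giving it time, the backend health state can be inspected directly (standard kubectl and gcloud commands; the ingress name comes from the config above, and the backend-service name is a placeholder):
# Events and backend annotations on the ingress
kubectl describe ingress basic-ingress
# Health of the GCE backend services the ingress controller created
gcloud compute backend-services list
gcloud compute backend-services get-health <backend-service-name> --global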