Nginx 400 Bad Request in Digital Ocean Kubernetes environment

The domain configured is ticket.devaibhav.live
ping ticket.devaibhav.live points to the correct IP address of the load balancer provisioned by DigitalOcean. I haven't configured SSL on the cluster yet, but when I try to access my website at http://ticket.devaibhav.live I get a 400 Bad Request. I am new to Kubernetes and networking inside a cluster.
According to my understanding, when the browser sends a request to http://ticket.devaibhav.live, the request goes to the DigitalOcean load balancer, and then the ingress controller (ingress-nginx in my case) routes the traffic based on the rules I have defined.
ingress-nginx service
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: 'true'
    service.beta.kubernetes.io/do-loadbalancer-hostname: 'ticket.devaibhav.live'
  labels:
    helm.sh/chart: ingress-nginx-2.0.3
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.32.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
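Note: the Service above enables PROXY protocol on the DigitalOcean load balancer via the do-loadbalancer-enable-proxy-protocol annotation; ingress-nginx then also has to be told to expect PROXY protocol through its ConfigMap, otherwise nginx will typically answer plain HTTP requests coming through the load balancer with 400 Bad Request. A minimal sketch of that ConfigMap, assuming the Helm chart's default ConfigMap name and namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  # name/namespace assumed to match the ingress-nginx Helm chart defaults
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # make nginx expect the PROXY protocol header the DO load balancer now sends
  use-proxy-protocol: "true"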
ingress resource rules
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: ticket.devaibhav.live
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
          - path: /api/tickets/?(.*)
            pathType: Prefix
            backend:
              service:
                name: tickets-srv
                port:
                  number: 3000
          - path: /api/orders/?(.*)
            pathType: Prefix
            backend:
              service:
                name: orders-srv
                port:
                  number: 3000
          - path: /api/payments/?(.*)
            pathType: Prefix
            backend:
              service:
                name: payments-srv
                port:
                  number: 3000
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: client-srv
                port:
                  number: 3000
Essentially, when I hit http://ticket.devaibhav.live, the request should match the last rule and be routed to client-srv.
client deployment and service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: vaibhav908/client
---
apiVersion: v1
kind: Service
metadata:
  name: client-srv
spec:
  selector:
    app: client
  ports:
    - name: client
      protocol: TCP
      port: 3000
      targetPort: 3000
The above configuration works well in development, where I am using minikube.
I am unable to understand where I am going wrong with the configuration. I will provide more details if necessary.
[edit]
On the deployed cluster, kubectl get services shows:
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
client-srv   ClusterIP   10.245.100.25   <none>        3000/TCP   2d17h
and some other services
kubectl describe ingress
Name:             ingress-service
Labels:           <none>
Namespace:        default
Address:          ticket.devaibhav.live
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                   Path  Backends
  ----                   ----  --------
  ticket.devaibhav.live
                         /api/users/?(.*)      auth-srv:3000 (10.244.1.76:3000)
                         /api/tickets/?(.*)    tickets-srv:3000 (10.244.0.145:3000)
                         /api/orders/?(.*)     orders-srv:3000 (10.244.1.121:3000)
                         /api/payments/?(.*)   payments-srv:3000 (10.244.1.48:3000)
                         /?(.*)                client-srv:3000 (10.244.1.32:3000)
Annotations:             kubernetes.io/ingress.class: nginx
                         nginx.ingress.kubernetes.io/use-regex: true
Events:                  <none>

Related

Digital Ocean Kubernetes one load balancer for two different namespaces

I have a small Kubernetes cluster (3 small nodes) on DigitalOcean. Production is already configured and working, but for development purposes I want to create a second environment (staging).
I have been working on it and almost everything is working, but I can't get the networking to work.
For example, I own the domain example.com.
Production runs on www.example.com and api.example.com. Staging runs in a different Kubernetes namespace on staging-www.example.com and staging-api.example.com.
The only thing these two environments share is the load balancer. The issue I am facing is that I can't reach staging-www.example.com or staging-api.example.com. I have misconfigured something and am trying to find where.
These two files are the ones I have for the staging ingress / network. As I wrote, production is working, but something is wrong for staging and I can't find the problem.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "8ce613ac-a217-4537-b6ce-d26b0eef0939"
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "example.com"
  labels:
    helm.sh/chart: ingress-nginx-4.1.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: staging
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: "letsencrypt-prod"
spec:
  tls:
    - hosts:
        - staging-www.example.com
        - staging-api.example.com
      secretName: example-tls
  defaultBackend:
    service:
      name: examplecatalogo-svc
      port:
        number: 80
  rules:
    - host: staging-www.example.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: examplecatalogo-svc
                port:
                  number: 80
    - host: staging-api.example.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: examplenodeapi-svc
                port:
                  number: 80
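One thing worth noting about the Ingress above: cert-manager.io/issuer refers to a namespaced Issuer, so an Issuer named letsencrypt-prod has to exist in the staging namespace itself (a cluster-wide issuer would be referenced with cert-manager.io/cluster-issuer instead). A minimal sketch of such an Issuer, assuming an ACME http01 setup; the server URL, e-mail, and secret name are placeholders:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
  namespace: staging          # Issuers are namespaced, unlike ClusterIssuers
spec:
  acme:
    # placeholder values; use your real ACME server and contact e-mail
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx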

503 Service Temporarily Unavailable - nginx, minikube, k8s

Hello, I am new to DevOps.
Problem: I am unable to access ticketing.dev from the browser (configured using nginx).
I am using nginx and minikube (running everything locally).
This is my Service and Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: arshad/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  type: NodePort
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
This is my Ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-serv
                port:
                  number: 3000
This is my kubectl get ingress output:
NAME              CLASS    HOSTS           ADDRESS          PORTS   AGE
ingress-service   <none>   ticketing.dev   192.168.99.101   80      45m
I also added ticketing.dev to /etc/hosts pointing at 192.168.99.101, but I am still getting 503 Service Temporarily Unavailable.
Can anyone please help?
Hi, you have a typo; that's why. Your Service name is auth-srv, but in the Ingress you are referencing the service name auth-serv. Change it in the Ingress to auth-srv instead of auth-serv.
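In other words, the path entry in the Ingress should point at the existing Service, roughly like this (same manifest as above, with only the service name changed):
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv   # was auth-serv; must match the Service's metadata.name
                port:
                  number: 3000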

Azure Kubernetes Nginx Ingress: How do I properly route to an API service and an Nginx web server with HTTPS and avoid 502?

I have 2 services: one serves up a REST API and the other serves up static content via an nginx web server.
I can retrieve the static content from the pod running the nginx web server via the ingress controller over HTTPS, provided that I don't use the following annotation within the ingress YAML:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
However, the backend API service then no longer works. If I add that annotation back, the backend service URL https://fqdn/restservices/engine-rest/v1/api works, but the front end https://fqdn/ web server throws a 502.
Ingress
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress
  namespace: namespace-abc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
    - http:
        paths:
          - path: /restservices/engine-rest/v1
            backend:
              serviceName: a
              servicePort: 8080
          - path: /
            backend:
              serviceName: b
              servicePort: 8011
Service API
kind: Service
apiVersion: v1
metadata:
  name: a
  namespace: namespace-abc
  labels:
    app: a
    version: 1
spec:
  ports:
    - name: https
      protocol: TCP
      port: 80
      targetPort: 8080
      nodePort: 31019
  selector:
    app: a
    version: 1
  clusterIP: <cluster ip>
  type: LoadBalancer
  sessionAffinity: ClientIP
  externalTrafficPolicy: Cluster
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
Service UI
kind: Service
apiVersion: v1
metadata:
  name: b
  namespace: namespace-abc
  labels:
    app: b
    version: 1
  annotations:
spec:
  ports:
    - name: http
      protocol: TCP
      port: 8011
      targetPort: 8011
      nodePort: 32620
  selector:
    app: b
    version: 1
  clusterIP: <cluster ip>
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
If your problem is that adding nginx.ingress.kubernetes.io/backend-protocol: HTTPS makes service a work but breaks service b, and removing it breaks service a but fixes service b, then the solution is to create two different Ingress objects so they can be annotated independently:
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-a
  namespace: namespace-abc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
    - http:
        paths:
          - path: /restservices/engine-rest/v1
            backend:
              serviceName: a
              servicePort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-b
  namespace: namespace-abc
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: b
              servicePort: 8011
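As a side note, the extensions/v1beta1 Ingress API used above has since been removed from Kubernetes; on newer clusters the same two-Ingress split would be written against networking.k8s.io/v1. A rough sketch of ingress-a in that schema (pathType and the explicit ingressClassName are assumptions, adjust them to your controller):
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-a
  namespace: namespace-abc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx   # assumption; omit if the nginx controller is the cluster default
  rules:
    - http:
        paths:
          - path: /restservices/engine-rest/v1
            pathType: Prefix
            backend:
              service:
                name: a
                port:
                  number: 8080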

Traefik Ingress not routing traffic

I have deployed a Kubernetes cluster on Vagrant machines with one master and two worker nodes.
Two Services are deployed, named nodeport-svc-rc and nodeport-svc-rs.
Services config:
# nodeport-svc-rc
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc-rc
spec:
  type: NodePort
  ports:
    - port: 5001
      targetPort: 5001
      nodePort: 30001
  selector:
    app: controller
---
# nodeport-svc-rs
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc-rs
spec:
  type: NodePort
  ports:
    - port: 5002
      targetPort: 5002
      nodePort: 30002
  selector:
    app: controller-rs
Ingress Config:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /demo
            backend:
              serviceName: nodeport-svc-rc
              servicePort: 5001
          - path: /demof
            backend:
              serviceName: nodeport-svc-rs
              servicePort: 5002
Traefik is able to detect the Ingress resource on its dashboard as backend services.
But no frontends are detected on the dashboard, and no IP addresses are shown for the backends.
Entry in the /etc/hosts file: XXX.XXX.X.X example.com
I'm unable to route traffic using the Ingress. If I hit example.com/demo from the browser, I get a "Site can't be reached" error. Where am I going wrong? Can someone help me?
# sudo kubectl describe ing
Name:             ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host         Path  Backends
  ----         ----  --------
  example.com
               /demo    nodeport-svc-rc:5001 (10.244.171.95:5001,10.244.171.96:5001,10.244.235.150:5001)
               /demof   nodeport-svc-rs:5002 (10.244.171.98:5002,10.244.235.157:5002,10.244.235.158:5002)
Annotations:   ingress.kubernetes.io/auth-secret: mysecret
               ingress.kubernetes.io/auth-type: basic
               kubernetes.io/ingress.class: traefik
Events:        <none>
And when I hit the NodePort services directly at example.com:30001 or example.com:30002, I successfully get a response.
Edit: below is the Traefik controller config:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik:v1.7.26-alpine
          name: traefik-ingress-lb
          args:
            - --web
            - --kubernetes

Get client IP address in GRPC service behind Kubernetes nginx ingress

I am still struggling with Kubernetes.
I have an issue with preserving the request IP address on a service, for logging purposes. Logging is done by a gRPC server. This code works as intended outside Kubernetes.
The Service is defined similar to this:
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    name: grpc-api
  name: grpc-api
  namespace: myns
spec:
  ports:
    - name: ext-5000
      port: 5000
      targetPort: 5000
    - name: grpc-5050
      port: 5050
      targetPort: 5050
  selector:
    name: grpc-api
  type: ClusterIP
Ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-myns
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    name: api-grpc
  name: api-grpc
  namespace: myns
spec:
  rules:
    - host: api.example.org
      http:
        paths:
          - backend:
              serviceName: grpc-api
              servicePort: 5000
            path: /
  tls:
    - hosts:
        - api.example.org
      secretName: grpc-api-ingress-cert
The documentation mentions externalTrafficPolicy: Local on Services whose type is LoadBalancer. Would it be enough to add that parameter to a ClusterIP-type Service, or do I have to change it to something else?
Thank you in advance.
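Note: externalTrafficPolicy is a field of NodePort and LoadBalancer Services only, so it cannot simply be added to the ClusterIP Service above; in a setup like this it would go on the ingress-nginx controller's own LoadBalancer Service. A minimal sketch, assuming the controller Service name and namespace from a default Helm install:
apiVersion: v1
kind: Service
metadata:
  # assumed controller Service name/namespace (ingress-nginx Helm chart defaults)
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # preserve the client source IP by only routing to nodes that run controller pods
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller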