Azure Kubernetes Nginx Ingress: How do I properly route to an API service and an Nginx web server with HTTPS and avoid 502?

I have 2 services: one serves up a REST API and the other serves up static content via an nginx web server.
I can retrieve the static content from the pod running the nginx web server via the ingress controller over HTTPS, provided that I don't use the following annotation in the ingress YAML:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
However, without it the backend API service no longer works. If I add that annotation back, the backend service URL https://fqdn/restservices/engine-rest/v1/api works, but the front-end web server at https://fqdn/ throws a 502.
Ingress
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress
  namespace: namespace-abc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
  - http:
      paths:
      - path: /restservices/engine-rest/v1
        backend:
          serviceName: a
          servicePort: 8080
      - path: /
        backend:
          serviceName: b
          servicePort: 8011
Service API
kind: Service
apiVersion: v1
metadata:
  name: a
  namespace: namespace-abc
  labels:
    app: a
    version: 1
spec:
  ports:
  - name: https
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 31019
  selector:
    app: a
    version: 1
  clusterIP: <cluster ip>
  type: LoadBalancer
  sessionAffinity: ClientIP
  externalTrafficPolicy: Cluster
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
Service UI
kind: Service
apiVersion: v1
metadata:
  name: b
  namespace: namespace-abc
  labels:
    app: b
    version: 1
  annotations:
spec:
  ports:
  - name: http
    protocol: TCP
    port: 8011
    targetPort: 8011
    nodePort: 32620
  selector:
    app: b
    version: 1
  clusterIP: <cluster ip>
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster

If your problem is that adding nginx.ingress.kubernetes.io/backend-protocol: HTTPS makes service a work but breaks service b, and removing it breaks service a but fixes service b, then the solution is to create two separate Ingress objects so they can be annotated independently:
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-a
  namespace: namespace-abc
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
spec:
  rules:
  - http:
      paths:
      - path: /restservices/engine-rest/v1
        backend:
          serviceName: a
          servicePort: 8080
---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-b
  namespace: namespace-abc
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: b
          servicePort: 8011
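Once both Ingress objects are applied, a quick way to confirm the controller admitted them (using the names and namespace from above) is:
kubectl get ingress -n namespace-abc
kubectl describe ingress ingress-a -n namespace-abc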

Related

Blazor server through k8s ingress controller

I've written a small Blazor application that works well when containerized and accessed through k3s port forwarding, but I'm struggling to find guidance on how to expose it correctly via an ingress controller. To illustrate:
If I run the Blazor application and access it via port-forwarding, page routing works as expected:
kubectl port-forward deployment/ 8000:80
However, when I add a ClusterIP service to the deployment and connect to it through the Traefik ingress controller, the app no longer loads correctly, and changing the route gives a 404 page not found error.
My ClusterIP service and ingress setup:
ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: driverpassthrough
spec:
  selector:
    app: driverpassthrough
  ports:
  - name: ui
    protocol: TCP
    port: 8010
    targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /passthrough
        backend:
          serviceName: driverpassthrough
          servicePort: 8010
In my case, I use k3s and Traefik, and I have 3 replicas of my Blazor Server app. To make it work, I had to enable sticky sessions (via an annotation) on the ClusterIP service, like so:
Service
apiVersion: v1
kind: Service
metadata:
  name: qscale-healthcheck-service
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
  labels:
    name: qscale-healthcheck-service
spec:
  type: ClusterIP
  selector:
    app: healthcheck
  ports:
  - name: http
    port: 80
    targetPort: 80
Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: qscale-healthcheck-service
          servicePort: 80
This is the link where I found the annotation: Traefik Doc

Get client IP address in GRPC service behind Kubernetes nginx ingress

I am still struggling with Kubernetes.
I have an issue with preserving the client IP address of requests to a service, for logging purposes. Logging is done by a gRPC server, and this code works as intended outside Kubernetes.
The Service is defined similar to this:
apiVersion: v1
kind: Service
metadata:
  annotations: {}
  labels:
    name: grpc-api
  name: grpc-api
  namespace: myns
spec:
  ports:
  - name: ext-5000
    port: 5000
    targetPort: 5000
  - name: grpc-5050
    port: 5050
    targetPort: 5050
  selector:
    name: grpc-api
  type: ClusterIP
Ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-myns
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    name: api-grpc
  name: api-grpc
  namespace: myns
spec:
  rules:
  - host: api.example.org
    http:
      paths:
      - backend:
          serviceName: grpc-api
          servicePort: 5000
        path: /
  tls:
  - hosts:
    - api.example.org
    secretName: grpc-api-ingress-cert
The documentation mentions setting externalTrafficPolicy: Local on a Service whose type is LoadBalancer. Would it be enough to add that parameter to a ClusterIP-type Service, or do I have to change it to something else?
Thank you in advance.
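For reference, this is roughly what the documented setting looks like when applied to a LoadBalancer-type Service (a minimal sketch; the ingress-nginx names below are illustrative and not taken from the question):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative name for the controller's Service
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Local routes external traffic only to endpoints on the node that received it,
  # which preserves the client's source IP instead of SNAT'ing it.
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 443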

Why isn't my ingress controller routing traffic from my external IP to the service in my cluster?

I have a cluster on DigitalOcean: 1 master with 2 nodes.
I'm using the nginx ingress controller with the DigitalOcean load balancer.
Three of the backends in my Ingress work fine. The fourth, where I use nginx, doesn't.
Does anyone know why?
Here are the configs. I left out the Services for the 1st through 3rd Deployments, which are working; I can add them if needed.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: hw1.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
  - host: hw2.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-second
          servicePort: 80
  - host: hw3.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-third
          servicePort: 80
  - host: hw4.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-fourth
          servicePort: 80
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-fourth
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-fourth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-fourth
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-fourth
  template:
    metadata:
      labels:
        app: hello-kubernetes-fourth
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 8080
First of all, if you use the nginx image, the containerPort should be the one exposed by the image's Dockerfile.
For the nginx image, the containerPort should be 80:
https://hub.docker.com/layers/nginx/library/nginx/stable/images/sha256-cab8e4374e1e32bac58a8c7ae96c252cadcb1049545ed4bb3e3bfd5d087983b9
Now test whether nginx is reachable by accessing podIP:containerPort from the minikube node:
kubectl get po -o wide
hello-kubernetes-fourth-cb4fb668c-7tkd4   1/1   Running   0   25m   172.17.0.12
curl 172.17.0.12
After this, you should modify the ports of the Service: targetPort should match the containerPort (80), and port can be 8080, for example as sketched below.
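A minimal sketch of that Service with the suggested ports (port 8080 on the Service, targetPort 80 matching the nginx container):
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-fourth
spec:
  type: ClusterIP
  ports:
  - port: 8080       # Service port, referenced as servicePort in the Ingress
    targetPort: 80   # port the nginx container listens on
  selector:
    app: hello-kubernetes-fourth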
Now access nginx via the Service's ClusterIP:
kubectl describe svc hello-kubernetes-fourth
curl ClusterIP:8080
If that is OK, also modify the servicePort of the Ingress to match the Service port (see the sketch after these steps). Don't forget to enable the ingress add-on, as it is disabled by default on minikube:
minikube addons enable ingress
* ingress was successfully enabled
After the ingress pod is up, and after adding <MINIKUBE-IP> hw4.example.com to /etc/hosts on your host machine, you should be able to curl from the host machine:
curl hw4.example.com
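For example, the hw4.example.com rule in the Ingress would then reference the Service port (a sketch matching the ports above):
  - host: hw4.example.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes-fourth
          servicePort: 8080   # matches the Service's port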
The deployment configuration is incorrect. Update the YAMLs as shown below:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-fourth
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-kubernetes-fourth
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-fourth
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-fourth
  template:
    metadata:
      labels:
        app: hello-kubernetes-fourth
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80
Get inside an nginx pod and verify that you are able to reach the service and get a response:
curl http://hello-kubernetes-fourth
Then you should be able to reach the service through the ingress.
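To exercise the full path through the ingress controller, you can also curl its external IP with the expected Host header (a sketch; substitute the load balancer IP that kubectl get svc reports for your nginx ingress controller):
curl -H "Host: hw4.example.com" http://<load-balancer-ip>/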

Error in implementing WebSocket in Kubernetes

We are trying to implement WebSocket support in Kubernetes by following the steps in the GCP documents https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#support_for_websocket and https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service, but we are getting "Error during WebSocket handshake: Unexpected response code: 502".
Error message: "WebSocket connection to 'wss://..../backend-channeladaptor-web/socket.io/?EIO=3&transport=websocket' failed: Error during WebSocket handshake: Unexpected response code: 502".
I have also attached the files created for the GKE implementation:
Backend-channeladaptor-web.yaml (Deployment, Service, and BackendConfig contents)
Converse-ingress.yaml (it has details of other services as well, which you can ignore except backend-channeladaptor-web).
Converse-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: converse-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: $Static-Ip-Name
    kubernetes.io/ingress.allow-http: "false"
spec:
  tls:
  - secretName: $SSL-certificate
  rules:
  - host: $HostName
    http:
      paths:
      - path: /*
        backend:
          serviceName: frontend-chat-client
          servicePort: 3040
      - path: /socket-io/*
        backend:
          serviceName: backend-channeladaptor-engineerportal
          servicePort: 11009
      - path: /login/*
        backend:
          serviceName: frontend-engineeringportal
          servicePort: 3021
      - path: /frontend-engineeringportal/*
        backend:
          serviceName: frontend-engineeringportal
          servicePort: 3021
      - path: /backend-channeladaptor-web/*
        backend:
          serviceName: backend-channeladaptor-web
          servicePort: 11006
Backend-channeladaptor-web.yaml
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-channeladaptor-web-backendconfig
spec:
  timeoutSec: 3600
  connectionDraining:
    drainingTimeoutSec: 3600
---
apiVersion: v1
kind: Service
metadata:
  name: backend-channeladaptor-web
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"8081":"backend-channeladaptor-web-backendconfig"}}'
spec:
  type: NodePort
  ports:
  - port: 8081
    targetPort: 11006
    protocol: TCP
    nodePort: 30078
    name: http
  selector:
    app: backend-channeladaptor-web
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: backend-channeladaptor-web
spec:
  selector:
    matchLabels:
      app: backend-channeladaptor-web
  replicas: 1
  template:
    metadata:
      labels:
        app: backend-channeladaptor-web
    spec:
      containers:
      - image: gcr.io/acn-careful-granite-240620/backend-channeladaptor-web:0.2
        name: backend-channeladaptor-web
        ports:
        - name: http
          containerPort: 11006
          hostPort: 11006
        env:
        - name: NODE_ENV
          value: dev
I expect response status code 101 (Switching Protocols) but get 502 Bad Gateway.
It looks like your Ingress points the URI /backend-channeladaptor-web/* at the Service backend-channeladaptor-web and expects that Service to be listening on 11006. However, the NodePort Service is listening on 8081.
The confusion might come from the targetPort directive, which points to 11006 in the actual backend (the Deployment).
This is what causes the 502s: although you're reaching an intermediary (the load balancer in your case), it is unable to reach the backend (the Deployment, served by the NodePort Service).
You can change the Ingress definition to point to 8081 instead, matching the current NodePort configuration, as sketched below.
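A sketch of the adjusted path entry (only servicePort changes; the Service's targetPort still forwards to 11006 in the pod):
      - path: /backend-channeladaptor-web/*
        backend:
          serviceName: backend-channeladaptor-web
          servicePort: 8081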

How do I make an ingress forward HTTPS traffic to an SSL port (443)?

How does an ingress forward HTTPS traffic to port 443 of the service (and eventually to 8443 on my container)? Do I have to make any changes to my ingress, or is this done automatically?
On GCP, I have a layer 4 balancer -> nginx-ingress controller -> ingress
My ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-keycloak
  annotations:
    kubernetes.io/ingress.class: "nginx"
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: http01
spec:
  tls:
  - hosts:
    - mysite.com
    secretName: staging-iam-tls
  rules:
  - host: mysite.com
    http:
      paths:
      - path: /auth
        backend:
          serviceName: keycloak-http
          servicePort: 80
I searched online but I don't see explicit examples of hitting 443; it's always 80 (or 8080).
My service keycloak-http is below (elided; my container is actually listening on 8443):
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-05-15T12:45:58Z
  labels:
    app: keycloak
    chart: keycloak-4.12.0
    heritage: Tiller
    release: keycloak
  name: keycloak-http
  namespace: default
  ..
spec:
  clusterIP: ..
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: keycloak
    release: keycloak
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Try this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-keycloak
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    certmanager.k8s.io/issuer: "letsencrypt-prod"
    certmanager.k8s.io/acme-challenge-type: http01
spec:
  tls:
  - hosts:
    - mysite.com
    secretName: staging-iam-tls
  rules:
  - host: mysite.com
    http:
      paths:
      - path: /auth
        backend:
          serviceName: keycloak-http
          servicePort: 443
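With the backend-protocol annotation the controller proxies to the backend over TLS, so servicePort should be the Service's https port (443 here), which in turn targets 8443 in the container. A quick end-to-end check (a sketch; -k skips certificate verification in case the Let's Encrypt certificate has not been issued yet):
curl -vk https://mysite.com/auth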