kubernetes ingress-nginx ignores special characters in path

I'm trying to have a rule listening to a specific path containing a dollar sign like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: metadata-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-regex: "false"
spec:
  ingressClassName: public
  tls:
  - hosts:
    - mydomain.com
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /api/v2/$metadata
        pathType: Prefix
        backend:
          service:
            name: busybox
            port:
              number: 8280
I don't want any url rewrite or anything fancy, just want this specific path to be caught and forwarded to this service.
Without the "$" it works.
I thought disabling regex with use-regex: "false" would fix it, but it didn't.
I also tried using the URL-encoded value for $ (%24metadata), but that doesn't help either.
I also tried "Exact" instead of "Prefix" as the pathType, with no luck.

I can't reproduce your problem, but I thought I'd walk through my test setup and you can tell me if anything is different. For the purpose of testing different paths, I have two deployments using the traefik/whoami image (this just provides a useful endpoint that shows us -- among other things -- the hostname and path involved in the request).
That looks like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: example
    component: app1
  name: example-app1
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: example
    component: app1
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: example
    component: app2
  name: example-app2
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: example
    component: app2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
    component: app1
  name: example-app1
spec:
  selector:
    matchLabels:
      app: example
      component: app1
  template:
    metadata:
      labels:
        app: example
        component: app1
    spec:
      containers:
      - image: docker.io/traefik/whoami:latest
        name: whoami
        ports:
        - containerPort: 80
          name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
    component: app2
  name: example-app2
spec:
  selector:
    matchLabels:
      app: example
      component: app2
  template:
    metadata:
      labels:
        app: example
        component: app2
    spec:
      containers:
      - image: docker.io/traefik/whoami:latest
        name: whoami
        ports:
        - containerPort: 80
          name: http
I've also deployed the following Ingress resource, which looks mostly like yours, except I've added a second paths config so that we can compare requests that match /api/v2/$metadata vs those that do not:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: house
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: example
spec:
  ingressClassName: nginx
  rules:
  - host: example.apps.infra.house
    http:
      paths:
      - backend:
          service:
            name: example-app1
            port:
              name: http
        path: /
        pathType: Prefix
      - backend:
          service:
            name: example-app2
            port:
              name: http
        path: /api/v2/$metadata
        pathType: Prefix
  tls:
  - hosts:
    - example.apps.infra.house
    secretName: example-cert
With these resources in place, a request to https://example.apps.infra.house/ goes to app1:
$ curl -s https://example.apps.infra.house/ | grep Hostname
Hostname: example-app1-596fcf48bd-dqhvc
Whereas a request to https://example.apps.infra.house/api/v2/$metadata goes to app2:
$ curl -s https://example.apps.infra.house/api/v2/\$metadata | grep Hostname
Hostname: example-app2-8675dc9b45-6hg7l
So that all seems to work.
We can, if we are so inclined, examine the nginx configuration that results from that Ingress. On my system, the nginx ingress controller runs in the nginx-ingress namespace:
$ kubectl -n nginx-ingress get deploy
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   1/1     1            1           8d
The configuration lives in /etc/nginx/nginx.conf in the container. We can cat the file to stdout and look for the relevant directives:
$ kubectl -n nginx-ingress exec deploy/ingress-nginx-controller -- cat /etc/nginx/nginx.conf
...
location /api/v2/$metadata/ {
    ...
}
...
Based on your comment, the following seems to work:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/rewrite-target: "/$2"
    cert-manager.io/cluster-issuer: house
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.apps.infra.house
    secretName: example-cert
  rules:
  - host: example.apps.infra.house
    http:
      paths:
      - path: /app1(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: example-app1
            port:
              name: http
      # Note the use of single quotes (') here; this is
      # important; using double quotes we would need to
      # write `\\$` instead of `\$`.
      - path: '/api/v2/\$metadata'
        pathType: Prefix
        backend:
          service:
            name: example-app2
            port:
              name: http
The resulting location directives look like:
location ~* "^/api/v2/\$metadata" {
    ...
}
location ~* "^/app1(/|$)(.*)" {
    ...
}
And a request for the $metadata path succeeds.
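One practical footnote when testing: the shell also treats $ specially, so quote or escape it in the URL, otherwise $metadata is expanded as a (typically empty) shell variable before curl ever sees it. A quick illustration, independent of any cluster (example.test is a placeholder host):

```shell
metadata=""  # simulate the variable being empty, as it normally would be
echo "https://example.test/api/v2/$metadata"    # double quotes: $metadata expands and vanishes
echo 'https://example.test/api/v2/$metadata'    # single quotes: the literal path survives
```

This is why the earlier curl example wrote `\$metadata`; single-quoting the whole URL works just as well.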

Related

EKS + ALB Ingress not working on different manifests

I'm trying to use the same ALB for multiple services, but when I define a new entry or rule in a manifest other than the main one, it is not added to the AWS ALB.
The only rules that appear are the ones I created in the alb-ingress.yaml manifest; the rules from app.yaml don't appear.
ALB print: https://prnt.sc/IfpfbHGvkkFi
alb-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  namespace: dev
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: eks-test
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: "alb-dev"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/healthcheck-path: /health
spec:
  ingressClassName: alb
  rules:
  - host: ""
    http:
      paths:
      - path: /status
        pathType: Prefix
        backend:
          service:
            name: status
            port:
              number: 80
app.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: dev
spec:
  template:
    metadata:
      name: app
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/echoserver:2.5
        ports:
        - containerPort: 80
  replicas: 1
  selector:
    matchLabels:
      app: app
---
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: dev
spec:
  ports:
  - port: 80
    protocol: TCP
  type: NodePort
  selector:
    app: app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: eks-test
    alb.ingress.kubernetes.io/group.name: "alb-triercloud-dev"
    alb.ingress.kubernetes.io/group.order: '1'
spec:
  ingressClassName: alb
  rules:
  - host: service-a.devopsbyexample.io
    http:
      paths:
      - path: /
        pathType: Exact
        backend:
          service:
            name: echoserver
            port:
              number: 80
Each Ingress is bound to the ALB through its group, so you need to modify your Ingress resource to add a distinct path.
See here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/how-it-works/#ingress-creation
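For instance, a second Ingress that actually merges into the same ALB might look like the sketch below. Two assumptions worth calling out: the backend references the Service actually defined in app.yaml (named app, port 80, rather than echoserver), and group.name matches the first Ingress ("alb-dev") -- the controller only merges Ingresses into one ALB when they share the same group name.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev
  annotations:
    alb.ingress.kubernetes.io/group.name: "alb-dev"   # must match the first Ingress's group
    alb.ingress.kubernetes.io/group.order: '1'
spec:
  ingressClassName: alb
  rules:
  - host: service-a.devopsbyexample.io
    http:
      paths:
      - path: /echo        # a path distinct from /status in the first Ingress
        pathType: Prefix
        backend:
          service:
            name: app      # the Service defined in app.yaml
            port:
              number: 80
```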

Kubernetes nginx-ingress controller always 401 http response

I'm researching Kubernetes and trying to configure the nginx-ingress controller, so I created a yaml config file for it, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-service
            port:
              number: 80
      - path: /api/c/platforms
        pathType: Prefix
        backend:
          service:
            name: command-clusterip-service
            port:
              number: 80
And created the relevant deployment and service like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: yuriyborovskyi91/platformsservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
And added acme.com to the Windows hosts file:
127.0.0.1 acme.com
But when trying to access http://acme.com/api/platforms or any other API route, I receive a 401 HTTP error, which confuses me, because I didn't configure any authorization; I'm using all default settings. If I call my service via NodePort, everything works fine.
Output of my kubectl get services, where the service I'm trying to access is running:
and the response of kubectl get services --namespace=ingress-nginx:

Why aren't a path /dev(/|$)(.*) and a rewrite annotation enough for a redirect in the Nginx ingress?

When I try to use a path different from / together with a rewrite annotation in an ingress, I get a timeout error in the browser. I need to be able to access my frontend with something like example.com/dev, while the frontend's pod needs to receive requests at /.
I use Azure K8s 1.22.6 and Nginx ingress 4.1.0.
Here are my resources:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: ealen/echo-server
        ports:
        - containerPort: 80
        imagePullPolicy: Always
And the ingress. This configuration works, and I've left the lines I need commented out:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-fa
  # annotations:
  #   nginx.ingress.kubernetes.io/use-regex: "true"
  #   nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      #- path: /dev(/|$)(.*)
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
If I use the rewrite annotation with $2 and the path "/dev(/|$)(.*)", then I get "ERR_CONNECTION_TIMED_OUT". What am I missing?
The problem was in the frontend Vue application: I added
publicPath: '/dev/',
to vue.config.js.
P.S. I've skipped the echo service.
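For completeness, the relevant piece of vue.config.js would look something like this (a minimal sketch; any other options in your config are unaffected):

```javascript
// vue.config.js
// Serve the built assets under /dev/ so they resolve correctly when the app
// is exposed at example.com/dev behind the ingress rewrite.
module.exports = {
  publicPath: '/dev/',
};
```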

How can I point my Nginx instance at the ClusterIP service for another pod?

I'm trying to configure my Kubernetes application so that my frontend application can speak to my backend, which is running on another deployment and is exposed via a ClusterIP service.
Currently, the frontend application is serving up some static content through Nginx. The configuration for that server lives in a mounted ConfigMap. I've got the / route serving my static content to users, and I'd like to configure another route in the server block, /api, pointing at my backend, but I'm not sure how to direct it at the ClusterIP service for my other deployment.
The full frontend deployment file looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
data:
  nginx.conf: |
    ## To make changes to the configuration
    ## You use the kubectl rollout restart nginx command.
    events {}
    http {
      include /etc/nginx/mime.types;
      include /etc/nginx/conf.d/*.conf;
      include /etc/nginx/extra-conf.d/*.conf;
      server {
        listen 80;
        location / {
          root /usr/share/nginx/html;
          index index.html index.htm;
          try_files $uri $uri/ /index.html =404;
        }
        #location /api {
        ##
        ## Send traffic to my API, running on another Kubernetes deployment, here...
        ## }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: mydockerusername/ts-react
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
          readOnly: true
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf
My backend API is exposed via a ClusterIP Service on port 1234 and looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: typeorm
spec:
  selector:
    matchLabels:
      app: typeorm # Find and manage all the apps with this label
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: typeorm # Create apps with this label
    spec:
      containers:
      - image: mydockerusername/typeorm
        imagePullPolicy: Always
        name: typeorm
        ports:
        - containerPort: 1234
        env:
        - name: ENV
          value: "production"
        envFrom:
        - secretRef:
            name: typeorm-config
---
apiVersion: v1
kind: Service
metadata:
  name: typeorm
  labels:
    app: typeorm
spec:
  type: ClusterIP
  ports:
  - port: 1234
    targetPort: 1234
  selector:
    app: typeorm
You can't expose your ClusterIP Service through the nginx config file here, as a ClusterIP Service is only reachable from inside the Kubernetes cluster. You need an nginx ingress controller and an Ingress resource to expose your ClusterIP Service to the outside world.
You can use an Ingress to expose your ClusterIP Service at the /api path.
Your Ingress manifest will look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: foo.bar.com # your server address here
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: typeorm
            port:
              number: 1234
You can even use a single Ingress to expose both your frontend and backend, but for that you need another Service pointing to the frontend Deployment. Then your manifest would look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 1234

Kubernetes routing to a specific pod based on a URL param

What I need looks like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: http
spec:
  serviceName: "nginx-set"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-set
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: http
  clusterIP: None
  selector:
    app: nginx
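As background, the headless Service (clusterIP: None) above is what makes per-pod routing possible at all: each StatefulSet pod gets a stable, individually resolvable DNS name of the form <pod>.<service>.<namespace>.svc.cluster.local. Assuming pods named web-0 and web-1 (as the Ingress below references) in the default namespace, those names would be:

```
web-0.nginx-set.default.svc.cluster.local
web-1.nginx-set.default.svc.cluster.local
```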
Here is the interesting part:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/testPath'
        backend:
          hostNames:
          - web-0
          serviceName: nginx-set #! There is no extra service. This
          servicePort: '80'      # is the Statefulset's Headless Service
I'm able to target a specific pod by setting the hostName based on the URL.
Now I'd like to know if it's possible in Kubernetes to create a rule like this:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/connect/(\d+)'
        backend:
          hostNames:
          - web-(result of regex match with \d+)
          serviceName: nginx-set #! There is no extra service. This
          servicePort: '80'      # is the Statefulset's Headless Service
or if I have to write a rule for each pod?
Sorry, that isn't possible; the best solution is to create multiple paths, each one referencing one pod:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/connect/0'
        backend:
          hostNames:
          - web-0
          serviceName: nginx-set
          servicePort: '80'
      - path: '/connect/1'
        backend:
          hostNames:
          - web-1
          serviceName: nginx-set
          servicePort: '80'
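If the replica count grows, the per-pod path entries can be generated rather than hand-written. A sketch in shell (pod and service names taken from the example above; adjust the loop to match your StatefulSet's replica count):

```shell
# Emit one Voyager path entry per StatefulSet replica (2 here: web-0 and web-1).
for i in 0 1; do
  cat <<EOF
      - path: '/connect/$i'
        backend:
          hostNames:
          - web-$i
          serviceName: nginx-set
          servicePort: '80'
EOF
done
```

Paste (or template) the output into the paths list of the Ingress above.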