How to set up a custom HTTP error in Kubernetes

I want to create a custom 403 error page.
Currently I already have an Ingress created and in the annotations I have something like this:
"nginx.ingress.kubernetes.io/whitelist-source-range": "100.01.128.0/20,88.100.01.01"
So any attempt to access my web app outside that IP range receives a 403 error.
In order to create a custom page I tried adding the following annotations:
"nginx.ingress.kubernetes.io/custom-http-errors": "403",
"nginx.ingress.kubernetes.io/default-backend": "default-http-backend"
where default-http-backend is the name of an app already deployed.
The Ingress currently looks like this:
{
  "kind": "Ingress",
  "apiVersion": "extensions/v1beta1",
  "metadata": {
    "name": "my-app-ingress",
    "namespace": "my-app-test",
    "selfLink": "/apis/extensions/v1beta1/namespaces/my-app-test/ingresses/my-app-ingress",
    "uid": "8f31f2b4-428d-11ea-b15a-ee0dcf00d5a8",
    "resourceVersion": "129105581",
    "generation": 3,
    "creationTimestamp": "2020-01-29T11:50:34Z",
    "annotations": {
      "kubernetes.io/ingress.class": "nginx",
      "nginx.ingress.kubernetes.io/custom-http-errors": "403",
      "nginx.ingress.kubernetes.io/default-backend": "default-http-backend",
      "nginx.ingress.kubernetes.io/rewrite-target": "/",
      "nginx.ingress.kubernetes.io/whitelist-source-range": "100.01.128.0/20,90.108.01.012"
    }
  },
  "spec": {
    "tls": [
      {
        "hosts": [
          "my-app-test.retail-azure.js-devops.co.uk"
        ],
        "secretName": "ssl-secret"
      }
    ],
    "rules": [
      {
        "host": "my-app-test.retail-azure.js-devops.co.uk",
        "http": {
          "paths": [
            {
              "path": "/api",
              "backend": {
                "serviceName": "my-app-backend",
                "servicePort": 80
              }
            },
            {
              "path": "/",
              "backend": {
                "serviceName": "my-app-frontend",
                "servicePort": 80
              }
            }
          ]
        }
      }
    ]
  },
  "status": {
    "loadBalancer": {
      "ingress": [
        {}
      ]
    }
  }
}
Yet I always get the default 403.
What am I missing?

I've reproduced your scenario and it worked for me.
I will guide you through the steps I followed.
Cloud provider: GKE
Kubernetes Version: v1.15.3
Namespace: default
I'm using 2 deployments of 2 images with a service for each one.
Service 1: default-http-backend - uses the nginx image; it will be our default backend.
Service 2: custom-http-backend - uses the inanimate/echo-server image; this service will be served if the request comes from a whitelisted IP.
Ingress: Nginx ingress with annotations.
Expected behavior: The ingress will be configured to use the default-backend, custom-http-errors and whitelist-source-range annotations. If the request comes from a whitelisted IP, the ingress will route it to custom-http-backend; if not, it will be redirected to default-http-backend.
Deployment 1: default-http-backend
Create a file default-http-backend.yaml with this content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: nginx
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Apply the yaml file: k apply -f default-http-backend.yaml
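Optionally, confirm the rollout finished before moving on (k is an alias for kubectl, as noted below; rollout status is a standard kubectl command):
k rollout status deployment/default-http-backend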
Deployment 2: custom-http-backend
Create a file custom-http-backend.yaml with this content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-http-backend
spec:
  selector:
    matchLabels:
      app: custom-http-backend
  template:
    metadata:
      labels:
        app: custom-http-backend
    spec:
      containers:
      - name: custom-http-backend
        image: inanimate/echo-server
        ports:
        - name: http
          containerPort: 8080
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: custom-http-backend
spec:
  selector:
    app: custom-http-backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Apply the yaml file: k apply -f custom-http-backend.yaml
Check if the services are up and running
I'm using the alias k for kubectl
➜  ~ k get svc
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
custom-http-backend    ClusterIP   10.125.5.227   <none>        80/TCP    73s
default-http-backend   ClusterIP   10.125.9.218   <none>        80/TCP    5m41s
...
➜  ~ k get pods
NAME                                    READY   STATUS    RESTARTS   AGE
custom-http-backend-67844fb65d-k2mwl    1/1     Running   0          2m10s
default-http-backend-5485f569bd-fkd6f   1/1     Running   0          6m39s
...
You could test the service using port-forward:
default-http-backend
k port-forward svc/default-http-backend 8080:80
Try to access http://localhost:8080 in your browser to see the nginx default page.
custom-http-backend
k port-forward svc/custom-http-backend 8080:80
Try to access http://localhost:8080 in your browser to see the custom page provided by the echo-server image.
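If you prefer the terminal over the browser, the same checks work with curl while a port-forward is running:
curl -s http://localhost:8080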
Ingress configuration
At this point we have both services up and running, and we need to install and configure the nginx ingress. You can follow the official documentation; that part is not covered here.
Once it is installed, let's deploy the ingress. Based on the code you posted I made some modifications: removed tls, used another domain, removed the /api path (for test purposes only), and added my home IP to the whitelist.
Create a file my-app-ingress.yaml with the content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '403'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
    nginx.ingress.kubernetes.io/whitelist-source-range: 207.34.xxx.xx/32
spec:
  rules:
  - host: myapp.rabello.me
    http:
      paths:
      - path: "/"
        backend:
          serviceName: custom-http-backend
          servicePort: 80
Apply the spec: k apply -f my-app-ingress.yaml
Check the ingress with the command:
➜  ~ k get ing
NAME             HOSTS              ADDRESS          PORTS   AGE
my-app-ingress   myapp.rabello.me   146.148.xx.xxx   80      36m
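You can also exercise the whitelist from a shell; a quick sketch (the address is the masked one reported above, so substitute your own; the Host header must match the ingress rule):
curl -i -H "Host: myapp.rabello.me" http://146.148.xx.xxx/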
That's all!
If I test from home with my whitelisted IP, the custom page is shown; but if I try to access it using my cellphone on a 4G network, the nginx default page is displayed.
Note: I'm using the ingress and services in the same namespace; if you need to work across namespaces, you need to use an ExternalName service.
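For reference, a minimal sketch of such an ExternalName service, assuming the backend really lives in another namespace (the names below are illustrative, not from the setup above):
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend        # local name the ingress annotation refers to
  namespace: default                # namespace where the ingress lives
spec:
  type: ExternalName
  # cluster DNS name of the real service in its home namespace
  externalName: default-http-backend.other-namespace.svc.cluster.local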
I hope that helps!
References:
kubernetes deployments
kubernetes service
nginx ingress
nginx annotations

I want to create a custom 403 error page. Currently I already have an Ingress created, with the annotations shown below.
So any attempt to access my web app outside that IP range receives a 403 error.
In order to create a custom page I tried adding the following annotations:
kind: Ingress
metadata:
  name: my-app-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: "/"
    nginx.ingress.kubernetes.io/custom-http-errors: '403'
    nginx.ingress.kubernetes.io/default-backend: default-http-backend
    nginx.ingress.kubernetes.io/whitelist-source-range: 125.10.156.36/32
spec:
  rules:
  - host: venkat.dev.vboffice.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: custom-http-backend
          servicePort: 80
where default-http-backend is the name of an app already deployed with the default nginx page.
If I test from home with my whitelisted IP, the custom page is shown; but if I try to access it using my cellphone on a 4G network, it displays the default backend 404.
Do I need to add any nginx config change to the custom-http-backend pod?
Deployment 1:default-http-backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: nginx
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Deployment 2: custom-http-backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-http-backend
spec:
  selector:
    matchLabels:
      app: custom-http-backend
  template:
    metadata:
      labels:
        app: custom-http-backend
    spec:
      containers:
      - name: custom-http-backend
        image: inanimate/echo-server
        ports:
        - name: http
          containerPort: 8080
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: custom-http-backend
spec:
  selector:
    app: custom-http-backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

One can customize the 403 error page for ingress-nginx by editing the nginx.tmpl file (found under /etc/nginx/template in the controller) and then mounting it into the ingress-nginx controller deployment. Below is the part of nginx.tmpl that needs to be edited:
{{/* Build server redirects (from/to www) */}}
{{ range $redirect := .RedirectServers }}
## start server {{ $redirect.From }}
server {
    server_name {{ $redirect.From }};

    {{ buildHTTPListener $all $redirect.From }}
    {{ buildHTTPSListener $all $redirect.From }}

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    error_page 403 /403.html;

    {{ if gt (len $cfg.BlockUserAgents) 0 }}
    if ($block_ua) {
        return 403;
    }
    {{ end }}
    {{ if gt (len $cfg.BlockReferers) 0 }}
    if ($block_ref) {
        return 403;
    }
    {{ end }}

    location = /403.html {
        root /usr/local/nginx/html/;
        internal;
    }

    set_by_lua_block $redirect_to {
        local request_uri = ngx.var.request_uri
        if string.sub(request_uri, -1) == "/" then
            request_uri = string.sub(request_uri, 1, -2)
        end
        {{ if ne $all.ListenPorts.HTTPS 443 }}
        {{ $redirect_port := (printf ":%v" $all.ListenPorts.HTTPS) }}
        return string.format("%s://%s%s%s", ngx.var.scheme, "{{ $redirect.To }}", "{{ $redirect_port }}", request_uri)
        {{ else }}
        return string.format("%s://%s%s", ngx.var.scheme, "{{ $redirect.To }}", request_uri)
        {{ end }}
    }

    return {{ $all.Cfg.HTTPRedirectCode }} $redirect_to;
}
## end server {{ $redirect.From }}
{{ end }}

{{ range $server := $servers }}
## start server {{ $server.Hostname }}
server {
    server_name {{ buildServerName $server.Hostname }} {{range $server.Aliases }}{{ . }} {{ end }};

    error_page 403 /403.html;

    {{ if gt (len $cfg.BlockUserAgents) 0 }}
    if ($block_ua) {
        return 403;
    }
    {{ end }}
    {{ if gt (len $cfg.BlockReferers) 0 }}
    if ($block_ref) {
        return 403;
    }
    {{ end }}

    location = /403.html {
        root /usr/local/nginx/html/;
        internal;
    }

    {{ template "SERVER" serverConfig $all $server }}

    {{ if not (empty $cfg.ServerSnippet) }}
    # Custom code snippet configured in the configuration configmap
    {{ $cfg.ServerSnippet }}
    {{ end }}

    {{ template "CUSTOM_ERRORS" (buildCustomErrorDeps "upstream-default-backend" $cfg.CustomHTTPErrors $all.EnableMetrics) }}
}
## end server {{ $server.Hostname }}
{{ end }}
In the above snippet, error_page 403 /403.html; is declared before we return 403. Then the location of /403.html is defined. Its root path is the same place where one should mount the 403.html page; in this case /usr/local/nginx/html/.
The snippet below will help you mount the volume with the custom pages.
volumes:
- name: custom-errors
  configMap:
    # Provide the name of the ConfigMap you want to mount.
    name: custom-ingress-pages
    items:
    - key: "404.html"
      path: "404.html"
    - key: "403.html"
      path: "403.html"
    - key: "50x.html"
      path: "50x.html"
    - key: "index.html"
      path: "index.html"
This solution doesn't require you to spawn another/extra service or pod of any kind to work.
For more info: https://engineering.zenduty.com/blog/2022/03/02/customizing-error-pages

You need to create and deploy a custom default backend which will return a custom error page. Follow the doc to deploy a custom default backend and configure the nginx ingress controller by modifying its deployment yaml to use this custom default backend.
The deployment yaml for the custom default backend is here and the source code is here.
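In practice the deployment modification usually comes down to one extra flag pointing the controller at your backend's service; a sketch (the flag is ingress-nginx's --default-backend-service; the namespace/name value here is illustrative):
containers:
- name: controller
  args:
  - /nginx-ingress-controller
  # <namespace>/<service> of the custom default backend
  - --default-backend-service=ingress-nginx/custom-default-backend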

Related

Error: connect ECONNREFUSED 127.0.0.1:443 http://ingress-nginx-controller.ingress-nginx.svc.cluster.local

kubectl get namespace
default           Active   3h33m
ingress-nginx     Active   3h11m
kube-node-lease   Active   3h33m
kube-public       Active   3h33m
kube-system       Active   3h33m

kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.102.205.190   localhost     80:31378/TCP,443:31888/TCP   3h12m
ingress-nginx-controller-admission   ClusterIP      10.103.97.209    <none>        443/TCP                      3h12m
When I make the request from the Next.js getInitialProps to http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser, it throws Error: connect ECONNREFUSED 127.0.0.1:443.
LandingPage.getInitialProps = async () => {
  if (typeof window === "undefined") {
    // On the server: go through the ingress controller's cluster-internal DNS name
    const { data } = await axios.get(
      "http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser",
      {
        headers: {
          Host: "ticketing.dev",
        },
      }
    );
    return data;
  } else {
    // In the browser: a relative request to the same origin
    const { data } = await axios.get("/api/users/currentuser");
    return data;
  }
};
My auth.deply.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: sajeebxn/auth
        env:
        - name: MONGO_URI
          value: 'mongodb://tickets-mongo-srv:27017/auth'
        - name: JWT_KEY
          valueFrom:
            secretKeyRef:
              name: jwt-secret
              key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 3000
    targetPort: 3000
And my ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
  - hosts:
    - ticketing.dev
    # secretName: e-ticket-secret
  rules:
  - host: ticketing.dev
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
      - path: /?(.*)
        pathType: Prefix
        backend:
          service:
            name: client-srv
            port:
              number: 3000
Try using http://ingress-nginx-controller.ingress-nginx/api/users/currentuser.
This worked for me.
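In the getInitialProps snippet above, that means only the server-side URL changes; a sketch (everything else stays the same):
const { data } = await axios.get(
  // <service>.<namespace> resolves through the cluster DNS search domains
  "http://ingress-nginx-controller.ingress-nginx/api/users/currentuser",
  { headers: { Host: "ticketing.dev" } }
);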

Output to a sequence adds a newline when working with map objects

I want to create several Service definitions using a defined range on my values.yaml file.
values.yaml
services:
  type: ClusterIP
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
Creating several definitions works fine, but the ports part always has a newline at the start of the sequence.
template services.yaml
{{- $top := . -}}
{{- range $key, $val := .Values.services.ports -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ printf "%s-%s" ($.Values.fullnameOverride) ($val.name) }}
  labels:
    app: {{ printf "%s-%s" ($.Values.fullnameOverride) ($val.name) }}
spec:
  type: {{ $.Values.services.type }}
  ports:
  - {{ toYaml $val | trim | nindent 4 }}
  selector:
    app: {{ $.Values.fullnameOverride }}
---
{{- end }}
But when I execute the above template I get the following output:
---
# Source: arichnettools/templates/services.yaml
apiVersion: v1
kind: Service
metadata:
  name: arichnettools-https
  labels:
    app: arichnettools-https
spec:
  type: ClusterIP
  ports:
  -
    name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: arichnettools
---
# Source: arichnettools/templates/services.yaml
apiVersion: v1
kind: Service
metadata:
  name: arichnettools-http
  labels:
    app: arichnettools-http
spec:
  type: ClusterIP
  ports:
  -
    name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: arichnettools
---
I do not know how to get rid of the newline after ports:. I am not sure whether it will give me an error during deployment, but it just drives me crazy. I have tried with indent as well, but no luck so far.
This newline after ports: shouldn't cause any errors, but you can easily get rid of it with just one modification to your template.
Instead of:
  ports:
  - {{ toYaml $val | trim | nindent 4 }}
Try to use:
  ports:
  - {{ toYaml $val | indent 4 | trim }}
We can see that after the modification it works as expected:
$ cat templates/services.yaml
{{- $top := . -}}
{{- range $key, $val := .Values.services.ports -}}
apiVersion: v1
kind: Service
metadata:
  name: {{ printf "%s-%s" ($.Values.fullnameOverride) ($val.name) }}
  labels:
    app: {{ printf "%s-%s" ($.Values.fullnameOverride) ($val.name) }}
spec:
  type: {{ $.Values.services.type }}
  ports:
  - {{ toYaml $val | indent 4 | trim }}
  selector:
    app: {{ $.Values.fullnameOverride }}
---
{{- end }}
$ helm template test .
---
# Source: mychart/templates/services.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-test-https
  labels:
    app: svc-test-https
spec:
  type: ClusterIP
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: svc-test
---
# Source: mychart/templates/services.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-test-http
  labels:
    app: svc-test-http
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: svc-test
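As a side note, a slightly different idiom avoids placing the dash in the template at all by rendering the element as a one-item list (a sketch using Sprig's list function; it should produce equivalent output under the same values.yaml):
  ports:
    {{- toYaml (list $val) | nindent 2 }}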

Problem Sub Path Ingress Controller for Backend Service

I have a problem setting up ingress controller paths for a backend service. For example, I want to set up:
frontend app with Angular (path: /)
backend service with NodeJS (path: /webservice).
NodeJS: Index.js
const express = require('express')
const app = express()
const port = 4000
app.get('/', (req, res) => res.send('Welcome to myApp!'))
app.use('/data/office', require('./roffice'));
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
Another route: roffice.js
var express = require('express')
var router = express.Router()

router.get('/getOffice', async function (req, res) {
  res.send('Get Data Office')
});

module.exports = router
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ws-stack
spec:
  selector:
    matchLabels:
      run: ws-stack
  replicas: 2
  template:
    metadata:
      labels:
        run: ws-stack
    spec:
      containers:
      - name: ws-stack
        image: wsstack/node/img
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4000
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-wsstack
  labels:
    run: service-wsstack
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    nodePort: 30009
    targetPort: 4000
  selector:
    run: ws-stack
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: hello-world.info
  - http:
      paths:
      - path: /
        backend:
          serviceName: service-ngstack   # frontend
          servicePort: 80
      - path: /webservice
        backend:
          serviceName: service-wsstack   # backend
          servicePort: 80
I set up the deployment, service and ingress successfully, but when I call them with curl:
curl http://<minikubeip>/webservice --> Welcome to myApp! => correct
curl http://<minikubeip>/webservice/data/office/getOffice --> Welcome to myApp! => not correct
If I call another route, the result is the same 'Welcome to myApp!'. But if I use the NodePort,
curl http://<minikubeip>:30009/data/office/getOffice => 'Get Data Office', it works properly.
What is the problem? Any solution? Thank you.
TL;DR
nginx.ingress.kubernetes.io/rewrite-target: /$2
path: /webservice($|/)(.*)
Explanation
The problem comes from this line in your ingress:
nginx.ingress.kubernetes.io/rewrite-target: /
You're telling nginx to rewrite the url to / whatever it matches:
/webservice => /
/webservice/data/office/getOffice => /
To do what you're trying to do use regex, here is a simple example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: hello-world.info
  - http:
      paths:
      - path: /
        backend:
          serviceName: service-ngstack   # frontend
          servicePort: 80
      - path: /webservice($|/)(.*)
        backend:
          serviceName: service-wsstack   # backend
          servicePort: 80
This way you're asking nginx to rewrite your url with the second matching group.
Finally it gives you:
/webservice => /
/webservice/data/office/getOffice => /data/office/getOffice
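With the regex rewrite in place, the earlier curl tests should now behave like the NodePort test did (expected outputs taken from the question):
curl http://<minikubeip>/webservice                         # => 'Welcome to myApp!'
curl http://<minikubeip>/webservice/data/office/getOffice   # => 'Get Data Office'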

How to fix "bad certificate error" in traefik 2.0?

I'm setting up traefik 2.0-alpha with Let's Encrypt certificates inside GKE, but now I'm stuck on a "server.go:3012: http: TLS handshake error from 10.32.0.1:2244: remote error: tls: bad certificate" error in the container logs.
Connections via http work fine. When I try to connect via https, traefik returns 404 with its own default certificate.
I found the same problem for traefik v1 on GitHub. The solution was to add this to the config:
InsecureSkipVerify = true
passHostHeader = true
It doesn't help me.
Here is my configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-ingress-configmap
  namespace: kube-system
data:
  traefik.toml: |
    [Global]
    sendAnonymousUsage = true
    debug = true
    logLevel = "DEBUG"

    [ServersTransport]
    InsecureSkipVerify = true

    [entrypoints]
      [entrypoints.web]
      address = ":80"
      [entryPoints.web-secure]
      address = ":443"
      [entrypoints.mongo-port]
      address = ":11111"

    [providers]
      [providers.file]

    [tcp] # YAY!
      [tcp.routers]
        [tcp.routers.everything-to-mongo]
        entrypoints = ["mongo-port"]
        rule = "HostSNI(`*`)" # Catches every request
        service = "database"
      [tcp.services]
        [tcp.services.database.LoadBalancer]
          [[tcp.services.database.LoadBalancer.servers]]
          address = "mongodb-service.default.svc:11111"

    [http]
      [http.routers]
        [http.routers.for-jupyterx-https]
        entryPoints = ["web-secure"] # won't listen to entrypoint mongo-port
        # rule = "Host(`clients-ui.ddns.net`)"
        # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
        rule = "PathPrefix(`/jupyterx`)"
        service = "jupyterx"
        [http.routers.for-jupyterx.tls]
        [http.routers.for-jupyterx-http]
        entryPoints = ["web"] # won't listen to entrypoint mongo-port
        # rule = "Host(`clients-ui.ddns.net`)"
        # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
        rule = "PathPrefix(`/jupyterx`)"
        service = "jupyterx"
      [http.services]
        [http.services.jupyterx.LoadBalancer]
        PassHostHeader = true
        # InsecureSkipVerify = true
          [[http.services.jupyterx.LoadBalancer.servers]]
          url = "http://jupyter-service.default.svc/"
          weight = 100

    [acme] # every router with TLS enabled will now be able to use ACME for its certificates
    email = "account#mail.com"
    storage = "acme.json"
    # onHostRule = true # dynamic generation based on the Host() & HostSNI() matchers
    caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
      [acme.httpChallenge]
      entryPoint = "web" # used during the challenge
And DaemonSet yaml:
# ---
# apiVersion: v1
# kind: ServiceAccount
# metadata:
#   name: traefik-ingress-controller
#   namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
      # - name: traefik-ui-tls-cert
      #   secret:
      #     secretName: traefik-ui-tls-cert
      - name: traefik-ingress-configmap
        configMap:
          name: traefik-ingress-configmap
      containers:
      - image: traefik:2.0 # The official v2.0 Traefik docker image
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: web-secure
          containerPort: 443
          hostPort: 443
        - name: admin
          containerPort: 8080
        - name: mongodb
          containerPort: 11111
        volumeMounts:
        - mountPath: "/config"
          name: "traefik-ingress-configmap"
        args:
        - --api
        - --configfile=/config/traefik.toml
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 443
    name: web-secure
  - protocol: TCP
    port: 8080
    name: admin
  - port: 11111
    protocol: TCP
    name: mongodb
  type: LoadBalancer
  loadBalancerIP: 1.1.1.1
Any suggestions on how to fix it?
Due to the lack of manuals for traefik 2.0-alpha, the config file was written using only the manual from the official traefik page.
There is a "routers for HTTP & HTTPS" configuration example at https://docs.traefik.io/v2.0/routing/routers/ that looks like:
[http.routers]
  [http.routers.Router-1-https]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
  [http.routers.Router-1.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
But a working config looks like:
[http.routers]
  [http.routers.Router-1-https]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
  [http.routers.Router-1-https.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
So, in my config the line
[http.routers.for-jupyterx.tls]
should be changed to
[http.routers.for-jupyterx-https.tls]
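Applied to the ConfigMap above, the corrected router block would read (only the tls table name changes; the rest is unchanged from the posted config):
[http.routers]
  [http.routers.for-jupyterx-https]
  entryPoints = ["web-secure"]
  rule = "PathPrefix(`/jupyterx`)"
  service = "jupyterx"
  # the tls table is now named after the router it belongs to
  [http.routers.for-jupyterx-https.tls]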

istio load balancing of a single service with multiple versions

I was able to achieve load-balancing with sample istio applications
https://github.com/piomin/sample-istio-services
https://istio.io/docs/guides/bookinfo/
But I was not able to get istio load balancing working with a single private service that has 2 versions. Example: 2 consul servers with different versions.
Service and pod definitions:
apiVersion: v1
kind: Service
metadata:
  name: consul-test
  labels:
    app: test
spec:
  ports:
  - port: 8500
    name: http
  selector:
    app: test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
      - name: consul-test-v1
        image: consul:latest
        ports:
        - containerPort: 8500
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v2
    spec:
      containers:
      - name: consul-test-v2
        image: consul:1.1.0
        ports:
        - containerPort: 8500
Gateway definition:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: con-gateway
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        exact: /catalog
    route:
    - destination:
        host: consul-test
        port:
          number: 8500
Routing rules in virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consul-test
spec:
  hosts:
  - consul-test
  gateways:
  - con-gateway
  - mesh
  http:
  - route:
    - destination:
        host: consul-test
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: consul-test
spec:
  host: consul-test
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Though I route all traffic (http requests) to consul server version v1, my http requests to the consul-test service land on v1 and v2 alternately, i.e. they follow the round-robin rule.
$ kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
consul-test   ClusterIP   10.97.200.140   <none>        8500/TCP   9m

$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "ebfa341b-4557-a392-9f8a-8ee307113faa",
    "Node": "consul-test-v1-765dd566dd-6cmj9",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 9,
    "ModifyIndex": 10
  }
]

$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "1b60a5bd-9a17-ff18-3a65-0ff95b3a836a",
    "Node": "consul-test-v2-fffd475bc-st4mv",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 5,
    "ModifyIndex": 6
  }
]
I have the above-mentioned issue when curl is run against the service's ClusterIP:ClusterPort:
$ kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
consul-test   ClusterIP   10.97.200.140   <none>        8500/TCP   9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
But load balancing works as expected when curl is run against INGRESS_HOST and INGRESS_PORT (determining INGRESS_HOST and INGRESS_PORT is described here):
$ curl -L http://$INGRESS_HOST:$INGRESS_PORT/v1/catalog/nodes   --- WORKS
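One thing worth checking (an assumption, since it isn't shown in the question): Istio's routing rules are enforced by the Envoy sidecars, so a curl against the ClusterIP from outside the mesh bypasses them, while traffic through the ingress gateway does not. Repeating the test from a sidecar-injected pod would tell the two cases apart; a sketch with a hypothetical client pod named sleep:
# 'sleep' is a hypothetical sidecar-injected client pod in the mesh
kubectl exec -it sleep -c sleep -- curl -s http://consul-test:8500/v1/catalog/nodes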