Generating a redirect with traefik ingress on k3s?

I'm running prometheus and grafana under k3s, accessible (respectively) at http://monitoring.internal/prometheus and http://monitoring.internal/grafana. The grafana Ingress object, for example, looks like:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: monitoring.internal
    http:
      paths:
      - path: /grafana
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
This works fine, except that if you land at http://monitoring.internal/, you get a 404 error. I would like requests for http://monitoring.internal/ to redirect to http://monitoring.internal/grafana. I could perhaps create another service that runs something like darkhttpd ... --forward-all http://monitoring.internal/grafana, and create an Ingress object that would map / to that service, but it seems like there ought to be a way to do this with Traefik itself.
It looks like I'm running Traefik 2.4.8 locally:
$ kubectl -n kube-system exec -it deployment/traefik -- traefik version
Version: 2.4.8
Codename: livarot
Go version: go1.16.2
Built: 2021-03-23T15:48:39Z
OS/Arch: linux/amd64
I've found this documentation for 1.7 that suggests there is an annotation for exactly this purpose:
traefik.ingress.kubernetes.io/app-root: "/index.html": Redirects all requests for / to the defined path.
But setting that on the grafana Ingress object doesn't appear to have any impact, and I haven't been able to find similar docs for 2.x (I've looked around here, for example).
What's the right way to set up this sort of redirect?

Since I haven't been able to figure out Traefik yet, I thought I'd post my solution here in case anyone else runs into the same situation. I am hoping someone comes along who knows The Right Way to do this, and if I figure it out I'll update this answer.
I added a new Deployment that runs darkhttpd as a simple redirector:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redirector
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: redirector
        image: docker.io/alpinelinux/darkhttpd
        ports:
        - containerPort: 8080
        args:
        - --forward-all
        - http://monitoring.internal/grafana
A corresponding Service:
apiVersion: v1
kind: Service
metadata:
  name: redirector
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
And the following Ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redirector
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: monitoring.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: redirector
            port:
              number: 8080
These are all deployed with kustomize, which takes care of adding labels and selectors in the appropriate places. The kustomization.yaml looks like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
- ingress.yaml
- service.yaml
commonLabels:
  component: redirector
With all this in place, requests to http://monitoring.internal/ hit the redirector pod, which redirects them to http://monitoring.internal/grafana.
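If someone does know the Traefik-native way, I suspect it involves Traefik 2.x's Middleware CRD attached to the Ingress through an annotation. The following is only an untested sketch of that idea; it assumes the bundled Traefik has the kubernetescrd provider enabled and that everything lives in the default namespace:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirect-root-to-grafana
spec:
  redirectRegex:
    # only redirect the bare root URL
    regex: ^http://monitoring\.internal/$
    replacement: http://monitoring.internal/grafana
    permanent: true
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: root-redirect
  annotations:
    kubernetes.io/ingress.class: traefik
    # format is <namespace>-<middleware-name>@kubernetescrd
    traefik.ingress.kubernetes.io/router.middlewares: default-redirect-root-to-grafana@kubernetescrd
spec:
  rules:
  - host: monitoring.internal
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
The backend on the / rule should never really be reached, since the middleware issues the redirect first; the resource names and the namespace prefix in the annotation are my assumptions.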

Related

GKE Ingress Bad Gateway or Backend not found

I'm creating this issue even though there are various answers available, because none of them solved my problem. I'm using GKE Ingress in my example.
I've been using GKE and set up GKE Ingress purely to manage path-based routing for the images we keep in GCR.
I've created a .NET Core based API that has various paths; if I expose the deployment with a Service of type LoadBalancer it works perfectly. However, when I use GKE Ingress and set up the service and its backend, it throws an error like Bad Gateway or sometimes backend not found.
The .NET Core API application exposes the following paths when exposed via a Service of type LoadBalancer, for example:
http://35.202.38.40/api/books
http://35.202.38.40/api/categories
We also have other POST APIs along the same lines.
Now I'm stuck: when I use GKE Ingress and set up my ingress.yaml as follows, it doesn't work and throws an error.
Ingress.Yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: gke-ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: hello-world
          servicePort: 80
      - path: /kube
        backend:
          serviceName: hello-kubernetes
          servicePort: 80
NOTE: my hello-world service is exposing a .NET Core API application; kindly don't get confused by the name "hello-world", which is just for testing.
Service Yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: LoadBalancer
  selector:
    greeting: hello
    department: world
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Deployment Yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  selector:
    matchLabels:
      greeting: hello
      department: world
  replicas: 2
  template:
    metadata:
      labels:
        greeting: hello
        department: world
    spec:
      containers:
      - name: hello
        image: gcr.io/gcpone-yuy-123114/oneplus2:631
        ports:
        - containerPort: 80
Kindly advise how to set up path-based routing in GKE Ingress for a .NET Core API app.
NOTE
Inside the bash shell of the pod I tried to reach my API on localhost and it responded correctly. What am I missing?
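For what it's worth, a sketch of the same routing in the networking.k8s.io/v1 schema (service names copied from the question; the GCE controller treats /*-style wildcards as ImplementationSpecific paths):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gke-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /kube
        pathType: Prefix
        backend:
          service:
            name: hello-kubernetes
            port:
              number: 80
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-world
            port:
              number: 80
Independently of the schema, the GCE load balancer's default health check expects a 200 from GET / on the serving port, which is a common cause of 502/Bad Gateway from GKE Ingress when an API only serves under /api.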

Ingress - simple fanout configuration not working

I'm using Ubuntu 20.04.2 LTS. I installed microk8s 1.20.6 rev 2143 and am experimenting with ingress. I must be missing something, but it doesn't work as I expect it to. I tracked the strange behavior down to the following configuration:
ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ubuntu
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my-ubuntu
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
      - path: /nginx
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
  type: ClusterIP
  selector:
    app: nginx
Now,
curl my-ubuntu/ # this returns Welcome page, as expected
curl my-ubuntu/nginx # this returns Welcome page, as expected
curl my-ubuntu/bad-page.html # this returns 404 Not Found, as expected
curl my-ubuntu/nginx/bad-page.html # this returns Welcome page. WHY?
Any request under my-ubuntu/nginx/* returns Welcome page, even when the url is correct and should have returned different content. Did I configure something wrong?
I was able to reproduce the same strange behavior using Docker for Windows + WSL2 + Ubuntu + ingress installed using:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
EDIT
nginx-deployment.yaml I used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
When I try /nginx/ instead of /nginx like @HarshManvar suggested, I get this behavior:
curl my-ubuntu/ # this returns Welcome page, as expected
curl my-ubuntu/bad-page.html # this returns 404 Not Found, as expected
curl my-ubuntu/nginx # this returns 404 Not Found
curl my-ubuntu/nginx/ # this returns Welcome page
curl my-ubuntu/nginx/bad-page.html # this returns Welcome page
The Kubernetes Ingress documentation about Simple fanout also uses the /nginx pattern, but in my setup it does not work, as described above.
https://kubernetes.github.io/ingress-nginx/examples/rewrite/ explains how to use the rewrite-target annotation. I was able to make it work with the following ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ubuntu
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
      - path: /nginx($|/.*)
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
Each path defines a regular expression with ( ) groups, which yield $1, $2, etc., i.e. regex capture group variables. You then build rewrite-target from those variables, and the result is the actual URL passed to the container of the service handling the request.
Maybe there is another way, but this is the only way I was able to make it work.
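For completeness, the pattern from the linked rewrite example captures the sub-path in a second group, so that /nginx, /nginx/ and /nginx/anything all map cleanly; a sketch of that variant (untested here):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ubuntu
  annotations:
    # $2 is whatever was captured after /nginx/ (possibly empty)
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /nginx(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
With that, a request for /nginx/bad-page.html should be forwarded to the nginx pod as /bad-page.html and return the expected 404.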

K3s traefik ingress returns gateway timeout

I am currently playing around with an RPi-based k3s cluster and I am observing a weird phenomenon.
I deployed two applications.
The first one is nginx, which I can reach at the URL http://external-ip/foo based on the following ingress rule:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
  namespace: foo
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"
    traefik.ingress.kubernetes.io/rewrite-target: "/"
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: foo-service
          servicePort: 8081
And the other one is Grafana, which I cannot reach at the URL http://external-ip/grafana with the ingress rule below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: grafana
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"
    traefik.ingress.kubernetes.io/rewrite-target: "/"
spec:
  rules:
  - http:
      paths:
      - path: /grafana
        backend:
          serviceName: grafana-service
          servicePort: 3000
When I port-forward directly to the pod I can reach the Grafana app, and when I port-forward to the Grafana service it also works.
However, as soon as I try to reach it through the subpath I get a gateway timeout.
Does anyone have a guess what I am missing?
Here are the Deployment and Service for Grafana:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: grafana
  labels:
    app: grafana
    tier: frontend
    service: monitoring
spec:
  selector:
    matchLabels:
      app: grafana
      tier: frontend
  template:
    metadata:
      labels:
        app: grafana
        tier: frontend
        service: monitoring
    spec:
      containers:
      - image: grafana
        imagePullPolicy: IfNotPresent
        name: grafana
        envFrom:
        - configMapRef:
            name: grafana-config
        ports:
        - name: frontend
          containerPort: 3000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
  namespace: grafana
  labels:
    app: grafana
    tier: frontend
    service: monitoring
spec:
  selector:
    app: grafana
    tier: frontend
  type: NodePort
  ports:
  - name: frontend
    port: 3000
    protocol: TCP
    targetPort: 3000
Solution
I had to add the following two parameters to my configmap to make it work:
GF_SERVER_ROOT_URL=http://localhost:3000/grafana/
GF_SERVER_FROM_SUB_PATH=true
As I mentioned in the comments, Grafana is not listening on / like the default nginx.
There is a related GitHub issue about this, and if you want to make it work you should specify root_url:
grafana.ini:
  server:
    root_url: https://subdomain.example.com/grafana
Specifically take a look at this and this comment.
@tehemaroo added his own solution, which includes changing the root URL and sub_path in the ConfigMap:
I had to add the following two parameters to my configmap to make it work:
GF_SERVER_ROOT_URL=http://localhost:3000/grafana/
GF_SERVER_FROM_SUB_PATH=true
And related documentation about that
To serve Grafana behind a sub path:
Include the sub path at the end of the root_url.
Set serve_from_sub_path to true.
[server]
domain = example.com
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana/
serve_from_sub_path = true
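Put together as the grafana-config ConfigMap consumed by the Deployment's envFrom above, that could look roughly like the following. Grafana maps [server] ini keys to GF_SERVER_<KEY> environment variables, so serve_from_sub_path becomes GF_SERVER_SERVE_FROM_SUB_PATH; the values here are illustrative:
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-config
  namespace: grafana
data:
  # [server] root_url: the URL Grafana believes it is served from
  GF_SERVER_ROOT_URL: "http://localhost:3000/grafana/"
  # [server] serve_from_sub_path: serve Grafana from the /grafana sub path
  GF_SERVER_SERVE_FROM_SUB_PATH: "true"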

How to make harbor reachable behind istio ingress?

I have installed Harbor as follows:
helm install hub harbor/harbor \
--version 1.3.2 \
--namespace tool \
--set expose.ingress.hosts.core=hub.service.example.io \
--set expose.ingress.annotations.'kubernetes\.io/ingress\.class'=istio \
--set expose.ingress.annotations.'cert-manager\.io/cluster-issuer'=letsencrypt-prod \
--set externalURL=https://hub.service.example.io \
--set notary.enabled=false \
--set secretkey=secret \
--set harborAdminPassword=pw
Everything is up and running, but the page is not reachable at https://hub.service.example.io. The same problem occurs here: Why css and png are not accessible? But how do I set a wildcard * in Helm?
Update
Istio supports ingress gateway. This, for example, works without a Gateway and VirtualService definition:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: helloworld-ingress
spec:
  rules:
  - host: "hw.service.example.io"
    http:
      paths:
      - path: "/*"
        backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
---
I would say it won't work with ingress and istio.
As mentioned here
Simple ingress specifications, with host, TLS, and exact path based matches will work out of the box without the need for route rules. However, note that the path used in the ingress resource should not have any . characters.
For example, the following ingress resource matches requests for the example.com host, with /helloworld as the URL.
$ kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /helloworld
        backend:
          serviceName: myservice
          servicePort: grpc
EOF
However, the following rules will not work because they use regular expressions in the path and ingress.kubernetes.io annotations:
$ kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: this-will-not-work
  annotations:
    kubernetes.io/ingress.class: istio
    # Ingress annotations other than ingress class will not be honored
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /hello(.*?)world/
        backend:
          serviceName: myservice
          servicePort: grpc
EOF
I assume your hello-world example is working because it uses just one annotation, the ingress class.
If you take a look at the annotations of Harbor here, that might be the problem when you want to use Ingress with Istio.
but how to set wildcard * in Helm?
Wildcards have nothing to do with it here. As I mentioned in this answer, you can use either a wildcard or additional paths, which is already done correctly. Take a look at the ingress paths here.
https://github.com/goharbor/harbor-helm/blob/master/templates/ingress/ingress.yaml#L5
If you look here, they have the path hardcoded to a couple of ingress options. Envoy/Istio isn't one of them. However, you may be in luck: expose.ingress.controller set to "gce" seems to set the paths the way you need them to be. (I've never used gce; maybe they even use istio?)
Edit: the original answer is below. Apparently there is an ingress controller you can enable in Istio. There are absolutely no docs on it, but what should I expect?
In your case though, Helm is not your problem. Istio doesn't use Ingress objects; it uses 'Gateways' and 'VirtualServices'. You can't configure an app to use the Istio ingress system using kubernetes.io/ingress.class annotations.
(At least, that has been my experience, and I can't find anything to contradict it in their docs, but it is completely possible there is an Istio ingress controller that I'm not aware of.)
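To illustrate what that usually looks like, here is a rough Gateway/VirtualService sketch for Harbor. The service names (hub-harbor-core, hub-harbor-portal) and ports are assumptions based on a release named hub, so check them with kubectl get svc -n tool before relying on any of this; TLS is omitted to keep the sketch short:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: harbor-gateway
  namespace: tool
spec:
  selector:
    istio: ingressgateway   # the default istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - hub.service.example.io
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: harbor
  namespace: tool
spec:
  hosts:
  - hub.service.example.io
  gateways:
  - harbor-gateway
  http:
  - match:                       # API and registry traffic to the core service
    - uri:
        prefix: /api/
    - uri:
        prefix: /service/
    - uri:
        prefix: /v2/
    - uri:
        prefix: /c/
    route:
    - destination:
        host: hub-harbor-core    # assumed service name
        port:
          number: 80
  - route:                       # everything else to the portal UI
    - destination:
        host: hub-harbor-portal  # assumed service name
        port:
          number: 80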

Ingress responding with 'default backend - 404' when using GKE

Using the latest Kubernetes version in GCP (1.6.4), I have the following Ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myproject
  namespace: default
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - host: staging.myproject.io
    http:
      paths:
      - path: /poller
        backend:
          serviceName: poller
          servicePort: 8080
Here is my service and deployment:
apiVersion: v1
kind: Service
metadata:
  name: poller
  labels:
    app: poller
    tier: backend
    role: service
spec:
  type: NodePort
  selector:
    app: poller
    tier: backend
    role: service
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: poller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: poller
        tier: backend
        role: service
    spec:
      containers:
      - name: poller
        image: gcr.io/myproject-1364/poller:latest
        imagePullPolicy: Always
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: staging
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
In my /etc/hosts I have a line like:
35.190.37.148 staging.myproject.io
However, I get default backend - 404 when curling any endpoint on staging.myproject.io:
$ curl staging.myproject.io/poller/cache/status
default backend - 404
I have the exact same configuration working locally inside Minikube, with the only difference being the domain (dev.myproject.io), and that works like a charm.
I have read and tried pretty much everything that I could find, including stuff from here and here and here, but maybe I'm just missing something... any ideas?
It does take 5-10 minutes for an Ingress to actually become usable in GKE. In the meantime, you can see responses with status codes 404, 502 and 500.
There is an ingress tutorial here: https://cloud.google.com/container-engine/docs/tutorials/http-balancer. I recommend following it. Based on what you pasted, I can say the following:
You use service.Type=NodePort, which is correct.
I am not sure about the ingress.kubernetes.io/rewrite-target annotation, maybe that's the issue.
Make sure your application responds 200 OK to a GET / request.
Also, I realize you curl http://<ip>/, but your Ingress spec only handles the /poller endpoint. So it's normal that you get the default backend - 404 response while querying /; you didn't configure any backend for the / path in your Ingress spec.
If anyone else is facing this problem, check that the Host header is correct and matches the expected domain.
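To make the "responds 200 OK to GET /" advice concrete, here is a readiness probe sketch for the poller container. By default the GCE controller health-checks GET / on the serving port, and newer controller versions can derive the health-check path from a readiness probe; the path and timings below are illustrative:
# fragment of the poller Deployment's container spec
containers:
- name: poller
  image: gcr.io/myproject-1364/poller:latest
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /              # the probe path should return 200; GCE can reuse it for its health check
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 10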