How to make Traefik compatible with MicroK8s

I have a working setup on Minikube with Traefik as the ingress controller. I tried to use the same setup on MicroK8s, but Traefik does not work there. I can see the Traefik dashboard and it reports that everything is healthy, yet every request to the ingress URLs times out. If I use the endpoint IP of a service (which I can see in the Traefik dashboard) I can reach that service, but only partially: IP/service1 is accessible, but none of its sub-URLs are, e.g. IP/service1/sub-service1 does not work.
I also tried microk8s.enable ingress, but that created an NGINX ingress controller, so I disabled it again because I want to use Traefik.
Do I need to change my configuration to make it compatible with MicroK8s? If so, how?
I have to mention that I have two ingress files:
traefik-ui.yaml: contains both the Service and the Ingress for Traefik itself. I use this Service + Ingress to access the Traefik dashboard and, as mentioned, it works.
wws-ingress.yaml: my main Ingress, which routes traffic to my components inside Kubernetes; this is the part that doesn't work.
My yaml files:
traefik-ui.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
wws-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wws
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.frontend.rule.type: PathPrefixStrip
    traefik.frontend.passHostHeader: "true"
    traefik.backend.loadbalancer.sticky: "true"
    #traefik.ingress.kubernetes.io/rule-type: ReplacePathRegex
    traefik.wss.protocol: http
    traefik.wss.protocol: https
spec:
  rules:
  - host: streambridge.local
    http:
      paths:
      - path: /streambridge
        backend:
          serviceName: streambridge
          servicePort: 9999
      - path: /dashboard
        backend:
          serviceName: dashboard
          servicePort: 9009
      - path: /gateway
        backend:
          serviceName: gateway
          servicePort: 8080
      - path: /rdb
        backend:
          serviceName: rethinkdb
          servicePort: 8085
Minikube commands (this works without a problem):
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-ds.yaml
kubectl apply -f traefik-ui.yaml
kubectl apply -f wws-ingress.yaml
And in Microk8s I tried:
microk8s.kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-rbac.yaml
microk8s.kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-ds.yaml
microk8s.kubectl apply -f traefik-ui.yaml
microk8s.kubectl apply -f wws-ingress.yaml

After testing my setup on another machine and seeing that it worked there, I realised something was wrong with my own machine. After spending a good amount of time on this with the help of two colleagues and trying everything, we found that the problem was related to iptables on my machine, and we solved it as described here: https://github.com/ubuntu/microk8s/issues/72
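For anyone hitting the same timeouts, the workaround discussed in that issue boils down to the host's iptables FORWARD policy. Treat the following as a sketch rather than the canonical fix, since the exact steps depend on your distribution:

# Check the default policy of the FORWARD chain (Docker often sets it to DROP)
sudo iptables -S FORWARD | head -n 1

# Allow forwarded traffic so pod/service traffic can pass through the host
sudo iptables -P FORWARD ACCEPT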

Related

Path Based Routing with Nginx Controller not working

I have been trying to get Nginx path-based routing to work, but after spending almost 4 hours I fail to understand why it is not working. Before anyone downvotes my question: I have gone through almost every possible answer on Stack Overflow, and none of them worked for me.
So here what I did:
I installed nginx-ingress using Helm 3 (https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/) in a separate namespace - nginx-test:
helm install my-release nginx-stable/nginx-ingress
A version of the ingress controller (https://hub.helm.sh/charts/nginx-edge/nginx-ingress):
$ POD_NAME=$(kubectl get pods -l app=nginx-controller-nginx-ingress -o jsonpath='{.items[0].metadata.name}')
$
$ kubectl exec -it $POD_NAME -- /nginx-ingress --version
Version=edge GitCommit=50e908aa
$
There are 2 basic nginx Deployments and 2 Services already configured in the same namespace, and they work fine when I configure host-based routing for them.
The one below works fine for me (with host-based routing I get the expected index.html page from both individual URLs):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nginx1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx1
          servicePort: 80
  - host: nginx2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx2
          servicePort: 80
Now I wanted to achieve the same result using path-based routing: one URL with two paths, /nginx1 (pointing to the nginx1 service) and /nginx2 (pointing to the nginx2 service). So I configured the ingress resource below (and tried many permutations and combinations based on different examples from the internet), but none of them worked for me.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-path-based
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /nginx1
        backend:
          serviceName: nginx1
          servicePort: 80
      - path: /nginx2
        backend:
          serviceName: nginx2
          servicePort: 80
When I access the services directly it works fine; however, when I try curl http://nginx.example.com/nginx1 or curl http://nginx.example.com/nginx2, I get a 404 Not Found error.
I was expecting the same response that I was getting with host-based routing, but it does not seem to work.
So finally I had to install the controller using manifests instead of Helm charts (edge version).
I installed it from here (https://kubernetes.github.io/ingress-nginx/deploy/#bare-metal) and changed the Service type from NodePort to LoadBalancer to get a LoadBalancer IP. I am using MetalLB on bare metal.
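If you would rather patch the installed Service than edit the manifest, something along these lines should work (assuming the controller Service is called ingress-nginx-controller in the ingress-nginx namespace, as in the upstream manifests):

# Switch the controller Service from NodePort to LoadBalancer so MetalLB assigns an IP
kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer"}}'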
$ POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/component=controller -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.32.0
Build: git-446845114
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.10
-------------------------------------------------------------------------------
$
My Ingress resource looks the same as the one I posted in the question.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-path-based
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: nginx.gofork8s.com
    http:
      paths:
      - path: /nginx1
        backend:
          serviceName: nginx1
          servicePort: 80
      - path: /nginx2
        backend:
          serviceName: nginx2
          servicePort: 80
I added the new LoadBalancer IP to the /etc/hosts file to make the domain resolve:
192.168.0.1 nginx.example.com
Now I am able to access - http://nginx.example.com/nginx1 and http://nginx.example.com/nginx2.
I hope it will help someone. I still need to figure out settings with Helm Charts.
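As a side note, the upstream ingress-nginx rewrite example pairs rewrite-target: /$2 with an explicit capture group in the path. A minimal sketch of that pattern, reusing the host and service names from above (not tested against this exact setup), would be:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-path-based
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 refers to the second capture group, i.e. whatever follows /nginx1 or /nginx2
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /nginx1(/|$)(.*)
        backend:
          serviceName: nginx1
          servicePort: 80
      - path: /nginx2(/|$)(.*)
        backend:
          serviceName: nginx2
          servicePort: 80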

How to make harbor reachable behind istio ingress?

I have installed Harbor as follows:
helm install hub harbor/harbor \
  --version 1.3.2 \
  --namespace tool \
  --set expose.ingress.hosts.core=hub.service.example.io \
  --set expose.ingress.annotations.'kubernetes\.io/ingress\.class'=istio \
  --set expose.ingress.annotations.'cert-manager\.io/cluster-issuer'=letsencrypt-prod \
  --set externalURL=https://hub.service.example.io \
  --set notary.enabled=false \
  --set secretkey=secret \
  --set harborAdminPassword=pw
Everything is up and running, but the page is not reachable via https://hub.service.example.io. The same problem as in "Why css and png are not accessible?" occurs here, but how do I set the wildcard * in Helm?
Update
Istio supports acting as an ingress gateway. The following, for example, works without a Gateway and VirtualService definition:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: helloworld-ingress
spec:
  rules:
  - host: "hw.service.example.io"
    http:
      paths:
      - path: "/*"
        backend:
          serviceName: hello-kubernetes-first
          servicePort: 80
---
I would say it won't work with ingress and istio.
As mentioned here
Simple ingress specifications, with host, TLS, and exact path based matches will work out of the box without the need for route rules. However, note that the path used in the ingress resource should not have any . characters.
For example, the following ingress resource matches requests for the example.com host, with /helloworld as the URL.
$ kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /helloworld
        backend:
          serviceName: myservice
          servicePort: grpc
EOF
However, the following rules will not work because they use regular expressions in the path and ingress.kubernetes.io annotations:
$ kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: this-will-not-work
  annotations:
    kubernetes.io/ingress.class: istio
    # Ingress annotations other than ingress class will not be honored
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /hello(.*?)world/
        backend:
          serviceName: myservice
          servicePort: grpc
EOF
I assume your hello-world example works because it has just one annotation, the ingress class.
If you take a look at the annotations used by Harbor here, that might be the problem when you want to use Ingress with Istio.
but how to set wildcard * in Helm?
The wildcard has nothing to do with it. As I mentioned in this answer, you can use either a wildcard or additional paths, and Harbor already does that. Take a look at the ingress paths here.
https://github.com/goharbor/harbor-helm/blob/master/templates/ingress/ingress.yaml#L5
If you look here, the chart hardcodes the paths for a couple of ingress options, and Envoy/Istio isn't one of them. However, you may be in luck: expose.ingress.controller set to "gce" seems to set the paths the way you need them to be. (I've never used GCE; maybe they even use Istio?)
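If you want to try that, a hedged sketch on top of the install command from the question might be (expose.ingress.controller is the chart value mentioned above; --reuse-values keeps the other settings from the original install):

helm upgrade hub harbor/harbor \
  --version 1.3.2 \
  --namespace tool \
  --reuse-values \
  --set expose.ingress.controller=gce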
Edit: the original answer is below. Apparently there is an ingress controller you can enable in Istio. There are absolutely no docs on it, but what should I expect?
In your case, though, Helm is not your problem. Istio doesn't use Ingress objects; it uses Gateways and VirtualServices. You can't configure an app to use the Istio ingress system with kubernetes.io/ingress.class annotations.
(At least, that has been my experience, and I can't find anything to contradict it in their docs, but it is entirely possible there is an Istio ingress controller I'm simply not aware of.)
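For comparison, the Gateway/VirtualService route usually looks roughly like this. The host, the namespace, and the portal Service name/port below are assumptions based on the chart defaults from the question, so check them with kubectl get svc -n tool before applying anything:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: harbor-gateway
  namespace: tool
spec:
  selector:
    istio: ingressgateway   # the default istio ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - hub.service.example.io
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: harbor
  namespace: tool
spec:
  hosts:
  - hub.service.example.io
  gateways:
  - harbor-gateway
  http:
  - route:
    - destination:
        host: hub-harbor-portal   # assumed portal Service name for release "hub"
        port:
          number: 80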

AKS: IP whitelisting (ingress)

I am trying to whitelist IP(s) on the ingress in AKS. I am currently using ingress-nginx, installed without Helm.
The mandatory kubernetes resources can be found here
The service is started as:
spec:
  externalTrafficPolicy: Local
Full yaml here
My ingress definition is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  # namespace: ingress-nginx
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/whitelist-source-range: "xxx.xxx.xxx.xxx"
spec:
  rules:
  - http:
      paths:
      - path: /xx-xx
        backend:
          serviceName: xx-xx
          servicePort: 8080
      - path: /xx
        backend:
          serviceName: /xx
          servicePort: 5432
The IP whitelisting is not enforced. Am I doing something wrong?
After a lot of digging around, I found that the problem is caused by this NATing bug, described here; there is also a quick Medium read on it here.
Hope this saves future readers some trouble or helps track down the bug.
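One related detail: whitelist-source-range can only see the real client IP if the controller Service preserves it, which is what externalTrafficPolicy: Local above is for. A minimal sketch of setting it after the fact (assuming the controller Service is named ingress-nginx in the ingress-nginx namespace, as in the mandatory manifests):

# Preserve the client source IP on the controller Service
kubectl patch svc ingress-nginx -n ingress-nginx \
  -p '{"spec": {"externalTrafficPolicy": "Local"}}'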

kubernetes: access Ingress within a Pod

I have an Ingress object set up to route traffic to the appropriate Service based on the URL path. I would like to access/expose this Ingress object from within another Pod. Is this possible?
I tried to set up a Service on the Ingress but that didn't seem to work.
So, for whatever reason (SSR, lots of microservices, etc.) you want to access k8s resources using their ingress path mapping instead of calling each service by its internal name.
For example, you have an ingress config like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-service
          servicePort: 80
      - path: /api/cart/?(.*)
        backend:
          serviceName: cart-service
          servicePort: 80
and you want to access auth-service via http://example.com/api/users instead of http://auth-service.
All you have to do is replace the domain part (example.com in our case) with the ingress service URL. It depends on your configuration and environment, but it usually looks like http://[SERVICE_NAME].[NAMESPACE], for example:
GCP - http://ingress-nginx-controller.ingress-nginx
Helm ingress nginx - http://my-release-ingress-nginx-controller (here we use only the service name part, because Helm installs the ingress in the default namespace)
Minikube - if you are using the minikube ingress addon, you might run into a problem where you cannot access the ingress; in that case just use the Helm version (don't disable the ingress addon - just install the Helm version alongside it)
Get namespaces: kubectl get namespaces
Get service names inside a namespace: kubectl get services -n [NAMESPACE]
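Putting it together, a sketch of a call from inside a pod might look like this (the controller service name and namespace are taken from the GCP example above; adjust them, and set the Host header so the example.com rules are matched):

# From any pod: hit the ingress controller Service and set the Host header explicitly
curl -H "Host: example.com" http://ingress-nginx-controller.ingress-nginx/api/users/123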
If you have assigned a host name, you also have to add the domain name and the cluster's IP address to the /etc/hosts file. When you access a service via Ingress from outside the cluster, this is the file that is consulted for host name resolution.
However, a pod running inside the cluster does not have access to this /etc/hosts file; it has its own. To use the ingress, the pod needs the same domain name and IP address entry in its own /etc/hosts file.
To achieve this, you have to use hostAliases. Here's a sample of how that works:
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  hostAliases:
  - ip: <IP address>
    hostnames:
    - <host name>
For more detail on hostAliases, go to this link
I have spent so much time on this issue. I found a very simple solution. I am using Docker Desktop for Mac 3.3.1.
My Kubernetes Version: 1.19.7
I am trying to access UI URL from another pod running in the cluster.
My UI Service
apiVersion: v1
kind: Service
metadata:
  name: my-ui-service
spec:
  type: LoadBalancer
  selector:
    app: my-ui
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Ingress for the service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: my-site.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-ui-service
            port:
              number: 8080
I have used NGINX Ingress Controller.
Command to run the Ingress Controller:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml
Once the controller is ready, run this command to see the status of the ingress:
kubectl get ingress
Now see the description of the ingress:
kubectl describe ingress my-ingress
Here you will find
Rules:
  Host         Path  Backends
  ----         ----  --------
  my-site.com
               /     my-ui-service:8080 (10.1.2.198:8080)
In any pod in the cluster you can access the domain my-site.com by using my-ui-service:8080.
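A quick sketch of that from another pod (the Service name and port come from the manifests above):

# From any other pod in the cluster, the UI answers on the Service name directly
curl http://my-ui-service:8080/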
Inside your cluster your pods use services to reach other pods.
From outside the cluster a client may use ingress to reach services.
An Ingress resource routes connections to Services.
So your pod needs to be reachable through a Service (my-svc-N in the following example), which you then reference in your ingress definition.
Take a look at this example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - host: my-kube.info
    http:
      paths:
      - path: /
        backend:
          serviceName: my-svc-1
          servicePort: 80
  - host: cheeses.all
    http:
      paths:
      - path: /aaa
        backend:
          serviceName: my-svc-2
          servicePort: 80
      - path: /bbb
        backend:
          serviceName: my-svc-3
          servicePort: 80

Defining a fallback service for Kubernetes ingress

Is it possible to have a fallback service for Kubernetes ingresses in the event that none of the normal pods are live/ready? In other words, how would you go about presenting a friendly "website down" page to visitors if all pods crashed or went down somehow?
Right now, a page appears that says "default backend - 404" if that happens.
Here's what we tried, to no avail:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  backend:
    serviceName: website-down-service
    servicePort: 80
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 80
For reference, we're testing locally with Minikube and deploying to the cloud on Google's Container Engine.
If you are using Nginx, then the default-backend annotation should do the trick; sample:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  namespace: your-namespace
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/default-backend: fallback-backend
spec:
  <your spec here>
For the Nginx Ingress Controller there is a flag --default-backend-service, which currently points to the service showing the "default backend - 404" message. Just replace it with the service you want. See https://github.com/kubernetes/ingress/tree/master/controllers/nginx#command-line-arguments
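A minimal sketch of where that flag ends up. The container name here is the common upstream default and website-down-service is the fallback Service from the question; your controller Deployment may look different:

# Fragment of the nginx ingress controller Deployment's container spec
containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  - --default-backend-service=your-namespace/website-down-service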
If you're using another Ingress Controller, I expect it to have a similar option.