Traefik Ingress bad rule - kubernetes

I am working with Kubernetes on Google Cloud and trying to set up Traefik as the Ingress controller for the cluster. I based the code on the official docs at https://docs.traefik.io/user-guide/kubernetes/, but I get an error with the rule for PathPrefixStrip.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-api
  labels:
    app: auth-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
        version: v0.0.1
    spec:
      containers:
      - name: auth-api
        image: gcr.io/r10c-dev/auth-api:v0.1
        ports:
        - containerPort: 3000
        env:
        - name: AMQP_SERVICE
          value: broker:5672
        - name: CACHE_SERVICE
          value: session-cache
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  ports:
  - name: http
    targetPort: 80
    port: 3000
  type: NodePort
  selector:
    app: auth-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  rules:
  - http:
      paths:
      - path: /auth
        backend:
          serviceName: auth-api
          servicePort: http
In the GKE console the deployment seems to be linked to the service and the ingress, but when I try to access the IP, the server returns a 502 error.
I am also using a static IP:
gcloud compute addresses create web-static-ip --global
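One likely culprit here is a port mismatch rather than the PathPrefixStrip rule itself: the container listens on 3000 (containerPort: 3000), but the Service maps targetPort: 80. A sketch of a corrected Service, assuming the API really listens on 3000:
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  type: NodePort
  ports:
  - name: http
    port: 80          # port the ingress talks to
    targetPort: 3000  # must match the pod's containerPort
  selector:
    app: auth-api
kubectl get endpoints auth-api is a quick sanity check: an empty ENDPOINTS column means the Service does not resolve to any pod, which typically produces exactly this kind of 502.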

Related

How To Enable SSL on Google Kubernetes Engine while using ingress-nginx?

I am using GKE with ingress-nginx (https://kubernetes.github.io/ingress-nginx/). I tried many tutorials using cert-manager but was unable to get it working.
Could you give me an example YAML file, if you have gotten SSL working with ingress-nginx on Google Kubernetes Engine?
You can use this as a starting point and expand on it:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arecord-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arecord
  template:
    metadata:
      labels:
        app: arecord
    spec:
      containers:
      - name: arecord
        image: gcr.io/clear-shell-346807/arecord
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
---
apiVersion: v1
kind: Service
metadata:
  name: arecord-srv
spec:
  selector:
    app: arecord
  ports:
  - name: arecord
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: ssl-ip
spec:
  tls:
  - hosts:
    - vareniyam.me
    secretName: echo-tls
  rules:
  - host: vareniyam.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: arecord-srv
            port:
              number: 8080
You have said you're using nginx ingress, but your ingress class is saying gce:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
You have not indicated which ClusterIssuer or Issuer you want to use; cert-manager only issues certificates after you tell it to create one.
I am unsure which tutorials you have tried, but have you looked at the cert-manager docs here: https://cert-manager.io/docs/ ?
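For reference, a minimal Let's Encrypt ClusterIssuer for the nginx ingress class, adapted from the cert-manager docs (the email address and secret name here are placeholders):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com                 # replace with a real address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key   # secret that stores the ACME account key
    solvers:
    - http01:
        ingress:
          class: nginx
With that in place, annotating the Ingress with cert-manager.io/cluster-issuer: letsencrypt-prod tells cert-manager to obtain a certificate and store it in the echo-tls secret the Ingress references.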

Kubernetes ingress not routing

I have 2 services and deployments deployed on minikube on local dev. Both are accessible when I run minikube service. For the sake of simplicity I have attached code for only one service.
However, ingress routing is not working.
CoffeeApiDeployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffeeapi-deployment
  labels:
    app: coffeeapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffeeapi
  template:
    metadata:
      labels:
        app: coffeeapi
    spec:
      containers:
      - name: coffeeapi
        image: manigupta31286/coffeeapi:latest
        env:
        - name: ASPNETCORE_URLS
          value: "http://+"
        - name: ASPNETCORE_ENVIRONMENT
          value: "Development"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffeeapi-service
spec:
  selector:
    app: coffeeapi
  type: NodePort
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
    nodePort: 30036
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffeeapi-service
            port:
              number: 8080
You are missing the ingress class in the spec.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  ingressClassName: nginx # (or the class you configured)
Using NodePort on your service may also be problematic; at the very least it is not required, since you want the ingress controller to route traffic via the ClusterIP rather than hitting the NodePort directly.
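A sketch of the simplified Service with the NodePort dropped (ports otherwise as in the question):
apiVersion: v1
kind: Service
metadata:
  name: coffeeapi-service
spec:
  selector:
    app: coffeeapi
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Note also that the rewrite-target: /$1 annotation only does something useful when the path contains a capture group (e.g. path: /coffee/(.*) together with nginx.ingress.kubernetes.io/use-regex: "true"); with a plain /coffee prefix, $1 is empty and every request is rewritten to /.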

Why am I getting 502 errors on my ALB endpoints, targeted at EKS-hosted services

I am building a service in EKS that has two deployments, two services (NodePort), and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80, I can reach my working pods.
But when I hit the endpoints generated by the ALB, I get 502 errors.
When I look at the Registered Targets of the target group, I see the message: Health checks failed with these codes: [502]
Here is my complete yaml.
---
# Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-game"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-game"
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: "example-game"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Example nginxdemo deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-nginxdemo"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-nginxdemo"
    spec:
      containers:
      - image: nginxdemos/hello
        imagePullPolicy: Always
        name: "example-nginxdemo"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-nginxdemo"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port   # annotation key must be all lowercase
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
  - http:
      paths:
      - path: /game/*
        backend:
          serviceName: "service-example-game"
          servicePort: 80
      - path: /nginxdemo/*
        backend:
          serviceName: "service-example-nginxdemo"
          servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress', it just started working.
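In manifest form, the only change was the label on the Ingress; a sketch of the relevant fragment:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  labels:
    app: example-app-ingress   # was example-app, the same label the services used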

Kubernetes Ingress does not work with traefik

I created a Kubernetes cluster in Google Cloud Platform; after that I installed Helm/Tiller on the cluster, and then I installed Traefik with Helm as the official documentation says to do.
Now I'm trying to create an Ingress for a service, but if I put the annotation kubernetes.io/ingress.class: traefik, the load balancer for the Ingress is not created.
Without the annotation, though, it works with the default Ingress.
(The service type is NodePort)
EDIT: I also tried this example in a clean Google Cloud Kubernetes cluster: https://supergiant.io/blog/using-traefik-as-ingress-controller-for-your-kubernetes-cluster/ but it's the same: when I choose kubernetes.io/ingress.class: traefik, no load balancer is created for the ingress.
my files are:
animals-svc.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: bear
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: bear
---
apiVersion: v1
kind: Service
metadata:
  name: moose
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: moose
---
apiVersion: v1
kind: Service
metadata:
  name: hare
  annotations:
    traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: hare
animals-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: animals
  annotations:
    kubernetes.io/ingress.class: traefik
    # kubernetes.io/ingress.global-static-ip-name: "my-reserved-global-ip"
    # traefik.ingress.kubernetes.io/frontend-entry-points: http
    # traefik.ingress.kubernetes.io/redirect-entry-point: http
    # traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
  rules:
  - host: hare.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: hare
          servicePort: http
  - host: bear.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: bear
          servicePort: http
  - host: moose.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: moose
          servicePort: http
animals-deployment.yaml:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: bear
  labels:
    app: animals
    animal: bear
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: bear
  template:
    metadata:
      labels:
        app: animals
        task: bear
        version: v0.0.1
    spec:
      containers:
      - name: bear
        image: supergiantkir/animals:bear
        ports:
        - containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: moose
  labels:
    app: animals
    animal: moose
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: moose
  template:
    metadata:
      labels:
        app: animals
        task: moose
        version: v0.0.1
    spec:
      containers:
      - name: moose
        image: supergiantkir/animals:moose
        ports:
        - containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: hare
  labels:
    app: animals
    animal: hare
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: hare
  template:
    metadata:
      labels:
        app: animals
        task: hare
        version: v0.0.1
    spec:
      containers:
      - name: hare
        image: supergiantkir/animals:hare
        ports:
        - containerPort: 80
The services are created, but the ingress load balancer is not.
However, if I remove the line kubernetes.io/ingress.class: traefik, it works with the default Kubernetes ingress.
Traefik does not create a load balancer for you by default.
As the HTTP(S) load balancing with Ingress documentation mentions:
"When you create an Ingress object, the GKE ingress controller creates a Google Cloud Platform HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services."
This all applies to the GKE ingress controller (gce); more info about gce can be found here: https://github.com/kubernetes/ingress-gce
If you would like to use Traefik as your ingress controller, you have to expose the Traefik service with type: LoadBalancer.
Example:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 80
More info, with plenty of explanatory diagrams and a real working example, can be found in the Exposing Kubernetes Services to the internet using Traefik Ingress Controller article.
Hope this helps.
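Once the LoadBalancer Service is applied, you can watch for the external address GCP assigns (plain kubectl, assuming the service name above):
kubectl get svc traefik -w   # wait until EXTERNAL-IP changes from <pending> to a real address
That external IP (plus DNS or /etc/hosts entries for your test hosts) is where the Traefik-class Ingress rules are served.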
You can try to add more annotations, as below:
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
Like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-dashboard-ingress
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
    traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
  rules:
  - host: traefik-ui.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-dashboard
          servicePort: 8080

Kubernetes nginx ingress + oauth2 external auth timing out

I am attempting to protect a service's status page with an oauth2_proxy, using Azure AD as the external auth provider. Currently, if I browse to the public URL of the app (https://sub.domain.com/service/hangfire), I get a 504 gateway timeout, where it should be directing me to authenticate.
I had been mostly following this guide for reference: https://msazure.club/protect-kubernetes-webapps-with-azure-active-directory-aad-authentication/
If I disable the annotations that direct the authentication, I can get to the public status page without a problem. If I browse to https://sub.domain.com/oauth2, I get a prompt to authenticate with my provider, which I would expect. I am not sure where the issue lies in the ingress config, and I was unable to find any similar cases online, on Stack Overflow or otherwise.
In this case, everything (oauth deployment, service, and ingress rules) lives in a 'dev' namespace except the actual ingress deployment, which lives in its own namespace. I don't suspect this makes a difference, but SSL termination is handled by a gateway outside the cluster.
oauth2 deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - name: oauth2-proxy
        image: quay.io/pusher/oauth2_proxy:v3.2.0
        imagePullPolicy: IfNotPresent
        args:
        - --provider=azure
        - --email-domain=domain.com
        - --upstream=http://servicename
        - --http-address=0.0.0.0:4180
        - --azure-tenant=id
        - --client-id=id
        - --client-secret=number
        env:
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: secret
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
Ingress rules:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://sub.domain.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://sub.domain.com/oauth2/start?rd=https://sub.domain.com/service/hangfire"
spec:
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /service/hangfire
        backend:
          serviceName: service
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
I am getting 504 errors when I browse to the URL, but I do not see any errors in the ingress pods.
I ended up finding the resolution here: https://github.com/helm/charts/issues/5958
I had to use the internal service address for the auth-url, which I had not seen mentioned anywhere else.
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
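Putting it together, the annotations on the protected ingress would look like this (a sketch: the auth-signin URL has to stay external, since the browser is redirected there, while auth-url is called from inside the cluster; 'development' is the namespace the proxy runs in):
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
  nginx.ingress.kubernetes.io/auth-signin: https://sub.domain.com/oauth2/start?rd=https://sub.domain.com/service/hangfire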
This is what I've been doing with my OAuth proxy for Azure AD:
annotations:
  kubernetes.io/ingress.class: "nginx"
  ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
And I've been using this OAuth proxy:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - env:
        - name: OAUTH2_PROXY_PROVIDER
          value: azure
        - name: OAUTH2_PROXY_AZURE_TENANT
          value: xxx
        - name: OAUTH2_PROXY_CLIENT_ID
          value: yyy
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: zzz
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: anyrandomstring
        - name: OAUTH2_PROXY_HTTP_ADDRESS
          value: "0.0.0.0:4180"
        - name: OAUTH2_PROXY_UPSTREAM
          value: "http://where_to_redirect_to:443"
        image: machinedata/oauth2_proxy:latest
        imagePullPolicy: IfNotPresent
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
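For the /oauth2 ingress path to reach this deployment, it still needs a Service in front of it; a minimal sketch (the name and port are assumptions matching the deployment above):
apiVersion: v1
kind: Service
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  selector:
    app: oauth2-proxy
  ports:
  - name: http
    port: 4180
    targetPort: 4180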
My setup is similar to 4c74356b41's.
oauth2-proxy deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - args:
        - --azure-tenant=TENANT-GUID
        - --email-domain=company.com
        - --http-address=0.0.0.0:4180
        - --provider=azure
        - --upstream=file:///dev/null
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              key: client-id
              name: oauth2-proxy
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              key: client-secret
              name: oauth2-proxy
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              key: cookie-secret
              name: oauth2-proxy
        image: quay.io/pusher/oauth2_proxy:v3.1.0
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          name: http   # named port that the Service's targetPort: http refers to
oauth2-proxy service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: oauth2-proxy
  type: ClusterIP
oauth2-proxy ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  rules:
  - host: myapp.hostname.net
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /oauth2
oauth2-proxy configuration
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
data:
  # Values below are fake
  client-id: AAD_CLIENT_ID
  client-secret: AAD_CLIENT_SECRET
  cookie-secret: COOKIE_SECRET
Application using AAD Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$request_uri
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
  labels:
    app: myapp
  name: myapp
  namespace: monitoring
spec:
  rules:
  - host: myapp.hostname.net
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80
        path: /
  tls:
  - hosts:
    - myapp.hostname.net
An additional step that needs to be done is to add the redirect URI to the AAD app registration: navigate to your AAD App Registration in the Azure portal > Authentication > add https://myapp.hostname.net/oauth2/callback to Redirect URIs > Save.
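The same step can be scripted with the Azure CLI; a sketch, where the app ID is a placeholder and the flag name depends on your CLI version (recent releases use --web-redirect-uris, older ones used --reply-urls):
# add the oauth2-proxy callback to the app registration's redirect URIs
az ad app update \
  --id <APP_CLIENT_ID> \
  --web-redirect-uris https://myapp.hostname.net/oauth2/callback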