I have a Kubernetes cluster (v1.23.4) running Traefik (v2.7.0).
I would like to access the Kubernetes dashboard through a Traefik IngressRoute. Everything seems to work correctly: there are no errors in the logs of the Traefik pod or the dashboard. But when I try to access the dashboard, the page https://k8sdash.kub.techlabnews.com/api/v1/login/status fails with a 404 error (seen in the Firefox console).
I used this code to create the IngressRoute:
apiVersion: traefik.containo.us/v1alpha1
kind: ServersTransport
metadata:
  name: kubernetes-dashboard-transport
  namespace: kubernetes-dashboard
spec:
  serverName: "k8sdash.kub.techlabnews.com"
  insecureSkipVerify: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  entryPoints:
    - websecure
  routes:
    - kind: Rule
      match: Host(`k8sdash.kub.techlabnews.com`)
      services:
        - kind: Service
          port: 443
          name: kubernetes-dashboard
          namespace: kubernetes-dashboard
          serversTransport: kubernetes-dashboard-transport
  tls:
    secretName: kub.techlabnews-com-cert-secret-replica
Does anyone have an idea what the problem is?
Thanks
We are using Traefik v2 running in kubernetes in a shared namespace (called shared), with multiple namespaces for different projects/services. We are utilising the IngressRoute CRD along with middlewares.
We need to mirror (duplicate) all incoming traffic to a specific URL (blah.example.com/newservice) and forward it to two backend services in two different namespaces. Because they are split across two namespaces, they run under the same name, with the same port.
I've looked at the following link, but don't seem to understand it:
https://doc.traefik.io/traefik/v2.3/routing/providers/kubernetes-crd/#mirroring
This is my config:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: shared-ingressroute
  namespace: shared
spec:
  entryPoints: []
  routes:
    - kind: Rule
      match: Host(`blah.example.com`) && PathPrefix(`/newservice/`)
      middlewares:
        - name: shared-middleware-testing-middleware
          namespace: shared
      priority: 0
      services:
        - kind: Service
          name: customer-mirror
          namespace: namespace1
          port: TraefikService
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: shared-middleware-testing-middleware
  namespace: shared
spec:
  stripPrefix:
    prefixes:
      - /newservice/
---
apiVersion: traefik.containo.us/v1alpha1
kind: TraefikService
metadata:
  name: customer-mirror
  namespace: namespace1
spec:
  mirroring:
    name: newservice
    port: 8011
    namespace: namespace1
    mirrors:
      - name: newservice
        port: 8011
        percent: 100
        namespace: namespace2
What am I doing wrong?
Based on the docs, for your case the service kind should be TraefikService:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: shared-ingressroute
  namespace: shared
spec:
  entryPoints: []
  routes:
    - kind: Rule
      match: Host(`blah.example.com`) && PathPrefix(`/newservice/`)
      middlewares:
        - name: shared-middleware-testing-middleware
          namespace: shared
      services:
        - kind: TraefikService
          name: customer-mirror
          namespace: namespace1
I know there are many similar posts, but given the slight differences between everyone's environments, I have not found a solution that works for me. I am trying to access the Traefik dashboard running on bare-metal (Pi cluster) k3s, using the default load balancer in k3s.
Other Ingress resources have worked, like the ingress to the Pi-hole dashboard, for example. When I try to access the dashboard via https://www.traefik.localhost/dashboard/ I get an "unable to connect" error. I have traefik.localhost in /etc/hosts pointing to one of the LB ingress IPs, in this case .104.
In theory, the request should be picked up by the LB service on the respective node and forwarded to the Traefik service, if the entrypoint (80 in this case) is open. The Traefik service should look at the available providers, find the IngressRoute I've made, match the hostname, and forward the request to the api@internal service. I do not know how to check whether that service is running properly, which would be the last step in my debugging process if I knew how.
Here is the Traefik service:
kubectl describe service -n kube-system traefik
Name: traefik
Namespace: kube-system
Labels: app.kubernetes.io/instance=traefik
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=traefik
helm.sh/chart=traefik-10.3.0
Annotations: meta.helm.sh/release-name: traefik
meta.helm.sh/release-namespace: kube-system
Selector: app.kubernetes.io/instance=traefik,app.kubernetes.io/name=traefik
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.43.226.223
IPs: 10.43.226.223
LoadBalancer Ingress: 192.168.4.101, 192.168.4.102, 192.168.4.103, 192.168.4.104, 192.168.4.105
Port: web 80/TCP
TargetPort: web/TCP
NodePort: web 30690/TCP
Endpoints: 10.42.4.88:8000
Port: websecure 443/TCP
TargetPort: websecure/TCP
NodePort: websecure 30328/TCP
Endpoints: 10.42.4.88:8443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Here is the IngressRoute:
kubectl describe ingressRoute -n kube-system dashboard
Name: dashboard
Namespace: kube-system
Labels: <none>
Annotations: <none>
API Version: traefik.containo.us/v1alpha1
Kind: IngressRoute
Metadata:
Creation Timestamp: 2022-01-18T03:42:49Z
Generation: 9
Managed Fields:
API Version: traefik.containo.us/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:entryPoints:
f:routes:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2022-01-23T16:46:30Z
Resource Version: 628002
UID: b96eb707-b1a9-4a6c-b94f-a8b975b4120b
Spec:
Entry Points:
web
Routes:
Kind: Rule
Match: Host(`traefik.localhost`) && PathPrefix(`/`)
Services:
Kind: TraefikService
Name: api@internal
Events: <none>
Dynamic config:
cat traefik.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik-crd
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-crd-10.3.0.tgz
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-10.3.0.tgz
  api:
    insecure: true
  set:
    global.systemDefaultRegistry: ""
  valuesContent: |-
    rbac:
      enabled: true
    ports:
      websecure:
        tls:
          enabled: true
    podAnnotations:
      prometheus.io/port: "8082"
      prometheus.io/scrape: "true"
    providers:
      kubernetesCRD:
      kubernetesIngress:
        publishedService:
          enabled: true
    priorityClassName: "system-cluster-critical"
    image:
      name: "rancher/mirrored-library-traefik"
    tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
HelmChartConfig to modify the above Helm chart:
cat traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  api:
    insecure: true
    dashboard: true
What can I try to solve this?
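For comparison, k3s's HelmChartConfig applies chart overrides through spec.valuesContent rather than arbitrary top-level keys like spec.api. A hedged sketch of passing the API/dashboard settings as chart values instead (the flag names assume the Traefik v2 Helm chart's additionalArguments value):

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  # valuesContent is the field HelmChartConfig actually merges into the chart
  valuesContent: |-
    additionalArguments:
      - "--api.insecure=true"
      - "--api.dashboard=true"
```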
I also have k3s with the preinstalled Traefik and default load balancer.
This works for me.
If you implement this, you only need to type traefik.localhost in your browser and it should take you to your dashboard. No need to add /dashboard to your URL.
The middleware will redirect your HTTP request to HTTPS.
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: redirectscheme
  namespace: default
spec:
  redirectScheme:
    scheme: https
    permanent: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dash-http
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`traefik.localhost`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
      middlewares:
        - name: redirectscheme
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dash-https
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`traefik.localhost`)
      kind: Rule
      services:
        - name: api@internal
          kind: TraefikService
I've followed the documentation on how to enable IAP on GKE.
I have:
configured the consent screen
created OAuth credentials
added the universal redirect URL
added myself as an IAP-secured Web App User
And wrote my deployment like this:
data:
  client_id: <my_id>
  client_secret: <my_secret>
kind: Secret
metadata:
  name: backend-iap-secret
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 3000
  selector:
    k8s-app: grafana
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      containers:
        - env:
            - name: GF_SERVER_HTTP_PORT
              value: "3000"
          image: docker.io/grafana/grafana:6.7.1
          name: grafana
          ports:
            - containerPort: 3000
              protocol: TCP
          readinessProbe:
            httpGet:
              path: /api/health
              port: 3000
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
---
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: monitoring-tls
spec:
  domains:
    - monitoring.foo.com
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
spec:
  backend:
    serviceName: grafana
    servicePort: 443
When I look at my Ingress I have this:
$ k describe ingress
Name: grafana
[...]
Annotations: beta.cloud.google.com/backend-config: {"default": "backend-config-iap"}
ingress.kubernetes.io/backends: {"k8s-blabla":"HEALTHY"}
[...]
Events: <none>
$
I can connect to the web page without any problem and Grafana is up and running, but I can also connect without being authenticated (which is a problem).
So everything looks fine, but IAP is not activated. Why?
Worse, if I enable it manually it works, but if I rerun kubectl apply -f monitoring.yaml, IAP is disabled again.
What am I missing?
Because my secret values are stored in Secret Manager (and retrieved at build time), I suspected my secrets of containing glitches (spaces, \n, etc.), so I added a script to test them:
gcloud compute backend-services update \
--project=<my_project_id> \
--global \
$(kubectl get ingress grafana -o json | jq -r '.metadata.annotations."ingress.kubernetes.io/backends"' | jq -r 'keys[0]') \
--iap=enabled,oauth2-client-id=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_client_id),oauth2-client-secret=$(gcloud --project="<my_project_id>" beta secrets versions access latest --secret=Monitoring_secret)
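The key extraction in that pipeline can be sanity-checked locally against a sample annotation value (the backend name below is made up):

```shell
# jq 'keys[0]' pulls the first backend-service name out of the annotation's JSON map
echo '{"k8s-be-12345--abcdef":"HEALTHY"}' | jq -r 'keys[0]'
# prints: k8s-be-12345--abcdef
```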
And now IAP is properly enabled with the correct OAuth client, so my secrets are "clean".
By the way, I also tried renaming the secret variables like this (from client_id):
* oauth_client_id
* oauth-client-id
* clientID (like in the BackendConfig documentation)
I've also written the values in the BackendConfig like this:
kind: BackendConfig
metadata:
  name: backend-config-iap
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: backend-iap-secret
      clientID: <value>
      clientSecret: <value>
But that doesn't work either.
Erratum:
The fact that IAP is disabled when I deploy again (after enabling it in the web UI) was caused by my deployment script in this test (I ran a kubectl delete first).
But nevertheless, I can't enable IAP with my backend configuration alone.
As suggested, I've filed a bug report: https://issuetracker.google.com/issues/153475658
Solution given by Totem:
Change the given YAML to this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.global-static-ip-name: monitoring
    networking.gke.io/managed-certificates: monitoring-tls
  name: grafana
[...]
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "backend-config-iap"}'
[...]
The BackendConfig is associated with the Service, not the Ingress...
Now it works!
You did everything right, with just one small change:
the annotation should be added to the Service resource.
apiVersion: v1
kind: Service
metadata:
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": { "443":"backend-config-iap"}}'
  name: grafana
Usually you need to associate it with a port, so I've added that in the example above; make sure it works with 443 as expected.
This is based on an internal example I'm using:
beta.cloud.google.com/backend-config: '{"ports": { "3000":"be-cfg"}}'
I am trying to enable rate limiting for my Istio-enabled service, but it doesn't work. How do I debug whether my configuration is correct?
apiVersion: config.istio.io/v1alpha2
kind: memquota
metadata:
  name: handler
  namespace: istio-system
spec:
  quotas:
    - name: requestcount.quota.istio-system
      maxAmount: 5
      validDuration: 1s
      overrides:
        - dimensions:
            engine: myEngineValue
          maxAmount: 5
          validDuration: 1s
---
apiVersion: config.istio.io/v1alpha2
kind: quota
metadata:
  name: requestcount
  namespace: istio-system
spec:
  dimensions:
    source: request.headers["x-forwarded-for"] | "unknown"
    destination: destination.labels["app"] | destination.service | "unknown"
    destinationVersion: destination.labels["version"] | "unknown"
    engine: destination.labels["engine"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
    - quotas:
        - charge: 1
          quota: requestcount
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
    - name: request-count
      namespace: istio-system
  services:
    # - service: '*' ; I tried with this as well
    - name: my-service
      namespace: default
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
    - handler: handler.memquota
      instances:
        - requestcount.quota
I tried with - service: '*' as well in the QuotaSpecBinding, but no luck.
How do I confirm that my configuration is correct? my-service is the Kubernetes service for my deployment. (Does this have to be an Istio VirtualService for rate limits to work? Edit: yes, it does!)
I followed this doc, except for the VirtualService part.
I have a feeling I am making a mistake somewhere in the namespaces.
You have to define a VirtualService for the service my-service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
spec:
  hosts:
    - myservice
  http:
    - route:
        - destination:
            host: myservice
This way, you let Istio know which service/host you are referring to.
In terms of debugging, there is a project named Kiali that aims to improve observability in Istio environments. It has validations for some Istio and Kubernetes objects: Istio configuration browse.
I am trying to configure basic authentication on an Nginx example with Traefik as the Ingress controller.
I just created the secret "mypasswd" in the Kubernetes secrets.
This is the Ingress I am using:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-realm: traefik
    ingress.kubernetes.io/auth-secret: mypasswd
spec:
  rules:
    - host: nginx.mycompany.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginxservice
              servicePort: 80
I checked the Traefik dashboard and the rule appears there. If I browse to nginx.mycompany.com I can see the Nginx webpage, but without basic authentication.
This is my nginx deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
Nginx service:
apiVersion: v1
kind: Service
metadata:
labels:
name: nginxservice
name: nginxservice
spec:
ports:
# The port that this service should serve on.
- port: 80
# Label keys and values that must match in order to receive traffic for this service.
selector:
app: nginx
type: ClusterIP
It is popular to use basic authentication. Per the Kubernetes documentation, you should be able to protect access to Traefik using the following steps:
Create an authentication file using the htpasswd tool. You'll be asked for a password for the user:
htpasswd -c ./auth <username>
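If htpasswd is not installed, an equivalent entry can be generated with openssl; a sketch with placeholder credentials (admin / secret) and a fixed salt for reproducibility:

```shell
# Produce an htpasswd-compatible "user:hash" line using openssl's apr1 (MD5) scheme
echo "admin:$(openssl passwd -apr1 -salt saltsalt secret)" > ./auth
cat ./auth
# the line starts with: admin:$apr1$saltsalt$
```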
Now use kubectl to create a secret in the monitoring namespace using the file created by htpasswd:
kubectl create secret generic mysecret --from-file auth --namespace=monitoring
Enable basic authentication by attaching annotations to Ingress object:
ingress.kubernetes.io/auth-type: "basic"
ingress.kubernetes.io/auth-secret: "mysecret"
So a full example config of basic authentication can look like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
    - host: dashboard.prometheus.example.com
      http:
        paths:
          - backend:
              serviceName: prometheus
              servicePort: 9090
You can apply the example as follows:
kubectl create -f prometheus-ingress.yaml -n monitoring
This should work without any issues.
Basic auth configuration for Kubernetes and Traefik 2 seems to have changed slightly. It took me some time to find the solution, which is why I want to share it. I use k3s, btw.
Steps 1 + 2 are identical to what @d0bry wrote; create the secret:
printf "my-username:`openssl passwd -apr1`\n" >> my-auth
kubectl create secret generic my-auth --from-file my-auth --namespace my-namespace
Step 3 is to create the Ingress object and apply a middleware that handles the authentication:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-auth-middleware
  namespace: my-namespace
spec:
  basicAuth:
    removeHeader: true
    secret: my-auth
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/router.middlewares: my-namespace-my-auth-middleware@kubernetescrd
spec:
  rules:
    - host: my.domain.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8080
And then of course apply the configuration
kubectl apply -f my-ingress.yaml
refs:
https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/
https://doc.traefik.io/traefik/middlewares/http/basicauth/
With the latest Traefik (verified with 2.7) it got even simpler. Just create a secret of type kubernetes.io/basic-auth and use it in your middleware. No need to create the username:password string first and create a secret from it.
apiVersion: v1
kind: Secret
metadata:
  name: my-auth
  namespace: my-namespace
type: kubernetes.io/basic-auth
data:
  username: <username in base64>
  password: <password in base64>
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-auth-middleware
  namespace: my-namespace
spec:
  basicAuth:
    removeHeader: true
    secret: my-auth
Note that the password is not hashed as it is with htpasswd; it is only base64-encoded.
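The data values can be produced on the command line; a sketch with placeholder credentials (admin / s3cret). Alternatively, a Kubernetes Secret's stringData field accepts the plain values and encodes them for you:

```shell
# base64-encode placeholder credentials for the Secret's data fields
printf 'admin' | base64    # YWRtaW4=
printf 's3cret' | base64   # czNjcmV0
```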
Ref docs