Kubernetes cert-manager: no certificate found on AWS ALB ingress

It's been a while and I can't get it to work. Basically, I have a K8s cluster on AWS EKS, ExternalDNS is set up and working, and now I'm trying to add TLS/SSL certificates with cert-manager.
These are my configs:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-cluster-issuer
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: my-email
    privateKeySecretRef:
      name: letsencrypt-cluster-issuer-key
    solvers:
    - selector:
        dnsZones:
        - "example.it"
        - "*.example.it"
      dns01:
        route53:
          region: eu-central-1
          hostedZoneID: HOSTEDZONEID
          accessKeyID: ACCESSKEYID
          secretAccessKeySecretRef:
            name: route53-secret
            key: secretkey
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: le-crt
spec:
  secretName: tls-secret
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-cluster-issuer
  commonName: "*.example.it"
  dnsNames:
  - "*.example.it"
ExternalDNS:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods", "nodes"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: externaldns # change to desired namespace: externaldns, kube-addons
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.11.0
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=example.it # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=aws
        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id=external-dns
        env:
        - name: AWS_DEFAULT_REGION
          value: eu-central-1 # change to region where EKS is installed
cert-manager is deployed in the cert-manager namespace, ExternalDNS in its own externaldns namespace, and the AWS ALB controller in kube-system.
Finally, my Ingress, deployed in the default namespace:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: master
  namespace: default
  labels:
    name: master
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/group.name: "alta"
    alb.ingress.kubernetes.io/group.order: "0"
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
spec:
  ingressClassName: alb
  tls:
  - hosts:
    - "example.it"
    secretName: "tls-secret"
  rules:
  - host: example.it
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: echoserver # random test service, returns some page w/some info
            port:
              number: 80
With all this config I still get "no certificate found for host: example.it" on my Ingress. The certificate is being issued and everything looks OK. Do you have any idea? I'm going insane over this.

Posting this in case someone encounters the same problem.
Basically, the AWS ALB does not support cert-manager certificates: you have to go to AWS ACM, get a certificate there, and then reference it through the certificate-arn annotation on your Ingress. Then everything should start working. Thanks Reddit for this.
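For example, a sketch of the Ingress from the question rewritten to use an ACM certificate (the ARN is a placeholder): the cert-manager annotation and the tls/secretName block are dropped, and the ALB terminates TLS with the ACM certificate instead.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: master
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    # ACM certificate the ALB should serve (placeholder ARN)
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:111122223333:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
spec:
  ingressClassName: alb
  rules:
  - host: example.it
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: echoserver
            port:
              number: 80

Recent versions of the AWS Load Balancer Controller can also auto-discover a matching ACM certificate from the Ingress host, but the explicit ARN annotation is the unambiguous route.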

Related

Microk8s/Kubernetes does not use the Let's Encrypt auto-generated certificate

Having the following k8s config:
---
kind: Namespace
apiVersion: v1
metadata:
  name: test
  labels:
    name: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: test-depl
spec:
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: test-app
        image: jfsanchez91/http-test-server
---
apiVersion: v1
kind: Service
metadata:
  namespace: test
  name: test-svc
spec:
  selector:
    app: test-app
  ports:
  - name: test-app
    protocol: TCP
    port: 80
    targetPort: 8090
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  namespace: test
  name: letsencrypt-cert-issuer-test-staging
spec:
  acme:
    email: email@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-cert-issuer-test-staging
    solvers:
    - http01:
        ingress:
          class: public
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  namespace: test
  name: letsencrypt-cert-issuer-test-prod
spec:
  acme:
    email: email@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-cert-issuer-test-prod
    solvers:
    - http01:
        ingress:
          class: public
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: test
  name: ingress-routes
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-cert-issuer-test-prod"
spec:
  tls:
  - hosts:
    - test.example.com
    secretName: tls-secret
  rules:
  - host: test.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-svc
            port:
              number: 80
The Let's Encrypt certificate is being issued and stored in tls-secret correctly.
But then when I try to open test.example.com I get an invalid certificate (the K8s default certificate) NET::ERR_CERT_AUTHORITY_INVALID.
Common Name (CN): Kubernetes Ingress Controller Fake Certificate
Organization (O): Acme Co
Q: How can I configure Ingress correctly to use the Let's Encrypt certificate?
Q: Is there anything else I should configure?
UPDATE: tls-secret type (kubernetes.io/tls):
$ kubectl -n test describe secrets tls-secret
Name:         tls-secret
Namespace:    test
Labels:       <none>
Annotations:  cert-manager.io/alt-names: test.example.com
              cert-manager.io/certificate-name: tls-secret
              cert-manager.io/common-name: test.example.com
              cert-manager.io/ip-sans:
              cert-manager.io/issuer-group: cert-manager.io
              cert-manager.io/issuer-kind: ClusterIssuer
              cert-manager.io/issuer-name: letsencrypt-cert-issuer-test-prod
              cert-manager.io/uri-sans:
Type:         kubernetes.io/tls

Data
====
tls.key:  1679 bytes
tls.crt:  5599 bytes
I'd recommend creating the Certificate yourself in order to have more control over which subdomains to include and over the renewal policy:
kubectl -n $NAMESPACE apply -f certificate.yaml
For example, for a DNS zone hosted on Azure DNS:
# certificate.yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cert-wildcard
spec:
  duration: 2160h # 90d
  renewBefore: 360h # 15d
  secretName: cert-wildcard
  issuerRef: # from issuer.yaml
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: domain.com # the certificate's Common Name (see your domain's certificate details)
  dnsNames: # list of all the domains associated with the certificate
  - domain.com
  - sub.domain.com
  # note: the acme block below is the legacy (v1alpha1-style) syntax; in
  # cert-manager.io/v1 the DNS-01 solver is configured on the (Cluster)Issuer instead
  acme:
    config:
    - dns01:
        provider: azure-dns
      domains:
      - domain.com
      - sub.domain.com
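The issuerRef above points at a letsencrypt-prod ClusterIssuer defined in a separate issuer.yaml that the answer does not show. Under cert-manager.io/v1 the Azure DNS-01 solver is configured on that issuer rather than on the Certificate; a sketch with placeholder Azure IDs and secret names:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@domain.com # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - dns01:
        azureDNS:
          # service principal with DNS Zone Contributor rights (all values are placeholders)
          clientID: 00000000-0000-0000-0000-000000000000
          clientSecretSecretRef:
            name: azuredns-config
            key: client-secret
          subscriptionID: 00000000-0000-0000-0000-000000000000
          tenantID: 00000000-0000-0000-0000-000000000000
          resourceGroupName: my-dns-rg
          hostedZoneName: domain.com
          environment: AzurePublicCloud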

Unable to find Prometheus custom app exporter as a target in Prometheus server deployed in Kubernetes

I created a custom exporter in Python using the prometheus-client package, then created the necessary artifacts so the metric would show up as a target in Prometheus deployed on Kubernetes.
But I am unable to see the metric as a target despite following all available instructions.
Help in finding the problem is appreciated.
Here is the summary of what I did.
Installed Prometheus using Helm on the K8s cluster in a namespace prometheus
Created a python program with prometheus-client package to create a metric
Created and deployed an image of the exporter in dockerhub
Created a deployment against the metrics image, in a namespace prom-test
Created a Service, ServiceMonitor, and a ServiceMonitorSelector
Created a service account, role and binding to enable access to the endpoint
Following is the code.
Service & Deployment
apiVersion: v1
kind: Service
metadata:
  name: test-app-exporter
  namespace: prom-test
  labels:
    app: test-app-exporter
spec:
  type: ClusterIP
  selector:
    app: test-app-exporter
  ports:
  - name: http
    protocol: TCP
    port: 6000
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: prom-test
spec:
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      #serviceAccount: test-app-exporter-sa
      containers:
      - name: test-app-exporter
        image: index.docker.io/cbotlagu/test-app-exporter:2
        imagePullPolicy: Always
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 5000
      imagePullSecrets:
      - name: myregistrykey
Service account and role binding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-app-exporter-sa
  namespace: prom-test
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-app-exporter-binding
subjects:
- kind: ServiceAccount
  name: test-app-exporter-sa
  namespace: prom-test
roleRef:
  kind: ClusterRole
  name: test-app-exporter-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-app-exporter-role
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
Service Monitor & Selector
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: test-app-exporter-sm
  namespace: prometheus
  labels:
    app: test-app-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      # Target app service
      app: test-app-exporter
  endpoints:
  - port: http
    interval: 15s
    path: /metrics
  namespaceSelector:
    matchNames:
    - default
    - prom-test
    #any: true
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: service-monitor-selector
  namespace: prometheus
spec:
  serviceAccountName: test-app-exporter-sa
  serviceMonitorSelector:
    matchLabels:
      app: test-app-exporter-sm
      release: prometheus
  resources:
    requests:
      memory: 400Mi
I am able to get the target identified by Prometheus, but even though the endpoint can be reached from within the cluster as well as via the node IP, Prometheus reports the target as down.
In addition to that, I am unable to see any other targets.
(screenshot: Prom-UI)
Any help is greatly appreciated.
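A few commands that can help narrow this down (a debugging sketch, not from the original post). One thing worth checking in the manifests above: the Prometheus resource's serviceMonitorSelector looks for app: test-app-exporter-sm, while the ServiceMonitor is labelled app: test-app-exporter, so it would never be selected.

# Verify the exporter actually serves /metrics through the Service
kubectl -n prom-test port-forward svc/test-app-exporter 6000:6000 &
curl -s localhost:6000/metrics | head

# Compare the ServiceMonitor's labels with the Prometheus selector
kubectl -n prometheus get servicemonitor test-app-exporter-sm --show-labels
kubectl -n prometheus get prometheus service-monitor-selector \
  -o jsonpath='{.spec.serviceMonitorSelector}{"\n"}'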
Following is my changed code
Deployment & Service
apiVersion: v1
kind: Namespace
metadata:
  name: prom-test
---
apiVersion: v1
kind: Service
metadata:
  name: test-app-exporter
  namespace: prom-test
  labels:
    app: test-app-exporter
spec:
  type: NodePort
  selector:
    app: test-app-exporter
  ports:
  - name: http
    protocol: TCP
    port: 5000
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: prom-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      serviceAccountName: rel-1-kube-prometheus-stac-operator
      containers:
      - name: test-app-exporter
        image: index.docker.io/cbotlagu/test-app-exporter:2
        imagePullPolicy: Always
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 5000
      imagePullSecrets:
      - name: myregistrykey
Roles and RoleBindings
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: prom-test
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - endpoints
  - pods
  - services
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-prom-test
  namespace: prom-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: rel-1-kube-prometheus-stac-operator
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: monitoring-role
  namespace: monitoring
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - endpoints
  - pods
  - services
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-prom-test
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-role
subjects:
- kind: ServiceAccount
  name: rel-1-kube-prometheus-stac-operator
  namespace: monitoring
Service Monitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: test-app-exporter-sm
  namespace: monitoring
  labels:
    app: test-app-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      # Target app service
      app: test-app-exporter
  endpoints:
  - port: http
    interval: 15s
    path: /metrics
  namespaceSelector:
    matchNames:
    - prom-test
    - monitoring
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: service-monitor-selector
  namespace: monitoring
spec:
  serviceAccountName: rel-1-kube-prometheus-stac-operator
  serviceMonitorSelector:
    matchLabels:
      app: test-app-exporter
      release: prometheus
  resources:
    requests:
      memory: 400Mi
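After applying the above, one way to confirm the target shows up is to port-forward to the Prometheus instance itself (assuming the operator created the usual prometheus-operated service for it):

kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090
# then open http://localhost:9090/targets and look for the test-app-exporter endpoints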

Multiple ingress controllers are not working

I'm creating multiple ingress controllers in different namespaces. Initially each one creates a load balancer in AWS and attaches the pod IP addresses to its target group, but after some days it stops adding new pod IPs to the target group. I've attached the ingress controller logs here.
E0712 15:02:30.516295 1 leaderelection.go:270] error retrieving resource lock namespace1/ingress-controller-leader-alb: configmaps "ingress-controller-leader-alb" is forbidden: User "system:serviceaccount:namespace1:fc-serviceaccount-icalb" cannot get resource "configmaps" in API group "" in the namespace "namespace1"
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "fc-ingress"
  annotations:
    kubernetes.io/ingress.class: alb-namespace1
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets:
    alb.ingress.kubernetes.io/certificate-arn:
    alb.ingress.kubernetes.io/ssl-policy:
    alb.ingress.kubernetes.io/security-groups:
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '2'
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '5'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=false
    alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=false
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=0
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=300
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=false
  labels:
    app: fc-label-app-ingress
spec:
  rules:
  - host: "hostname1.com"
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: "hostname2.com"
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
  - host: "hostname3.com"
    http:
      paths:
      - backend:
          serviceName: service3
          servicePort: 80
ingress_controller.yaml
# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-ingress-controller-alb
  namespace: namespace1
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resource for. For limiting ingress
  # namespace scope, see --watch-namespace.
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fc-label-app-icalb
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: fc-label-app-icalb
    spec:
      containers:
      - args:
        # Limit the namespace where this ALB Ingress Controller deployment will
        # resolve ingress resources. If left commented, all namespaces are used.
        - --watch-namespace=namespace1
        # Setting the ingress-class flag below ensures that only ingress resources with the
        # annotation kubernetes.io/ingress.class: "alb" are respected by the controller. You may
        # choose any class you'd like for this controller to respect.
        - --ingress-class=alb-namespace1
        # Name of your cluster. Used when naming resources created
        # by the ALB Ingress Controller, providing distinction between
        # clusters.
        - --cluster-name=$EKS_CLUSTER_NAME
        # AWS VPC ID this ingress controller will use to create AWS resources.
        # If unspecified, it will be discovered from ec2metadata.
        # - --aws-vpc-id=vpc-xxxxxx
        # AWS region this ingress controller will operate in.
        # If unspecified, it will be discovered from ec2metadata.
        # List of regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#vpc_region
        # - --aws-region=us-west-1
        # Enables logging on all outbound requests sent to the AWS API.
        # If logging is desired, set to true.
        # - ---aws-api-debug
        # Maximum number of times to retry the aws calls.
        # defaults to 10.
        # - --aws-max-retries=10
        env:
        # AWS key id for authenticating with the AWS API.
        # This is only here for examples. It's recommended you instead use
        # a project like kube2iam for granting access.
        #- name: AWS_ACCESS_KEY_ID
        #  value: KEYVALUE
        # AWS key secret for authenticating with the AWS API.
        # This is only here for examples. It's recommended you instead use
        # a project like kube2iam for granting access.
        #- name: AWS_SECRET_ACCESS_KEY
        #  value: SECRETVALUE
        # Repository location of the ALB Ingress Controller.
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
        imagePullPolicy: Always
        name: server
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      serviceAccountName: fc-serviceaccount-icalb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-clusterrole-icalb
rules:
- apiGroups:
  - ""
  - extensions
  resources:
  - configmaps
  - endpoints
  - events
  - ingresses
  - ingresses/status
  - services
  verbs:
  - create
  - get
  - list
  - update
  - watch
  - patch
- apiGroups:
  - ""
  - extensions
  resources:
  - nodes
  - pods
  - secrets
  - services
  - namespaces
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-clusterrolebinding-icalb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fc-clusterrole-icalb
subjects:
- kind: ServiceAccount
  name: fc-serviceaccount-icalb
  namespace: namespace1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-serviceaccount-icalb
  namespace: namespace1
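A quick way to check whether the RBAC above is actually in effect for the service account named in the error (a debugging sketch; kubectl auth can-i supports impersonating a service account):

kubectl auth can-i get configmaps \
  --as=system:serviceaccount:namespace1:fc-serviceaccount-icalb \
  -n namespace1
# "no" means the ClusterRole/ClusterRoleBinding is not granting the controller
# the configmap access it needs for leader election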
I have had an issue like that on AKS. I have two Nginx Ingress Controllers:
external-nginx-ingress
internal-nginx-ingress
Only one worked at a time, either the internal or the external one.
After specifying a unique election-id for each one, the problem was fixed.
I use the following HELM chart:
Repository = "https://kubernetes.github.io/ingress-nginx"
Chart = "ingress-nginx"
Chart_version = "4.1.3"
K8s Version = "1.22.4"
Deployment
kubectl get deploy -n ingress
NAME READY UP-TO-DATE AVAILABLE
external-nginx-ingress-controller 3/3 3 3
internal-nginx-ingress-controller 1/1 1 1
IngressClass
kubectl get ingressclass
NAME CONTROLLER PARAMETERS
external-nginx k8s.io/ingress-nginx <none>
internal-nginx k8s.io/internal-ingress-nginx <none>
Deployment for External
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-nginx-ingress-controller
  namespace: ingress
  annotations:
    meta.helm.sh/release-name: external-nginx-ingress
    meta.helm.sh/release-namespace: ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: external-nginx-ingress
      app.kubernetes.io/name: ingress-nginx
  template:
    spec:
      containers:
      - name: ingress-nginx-external-controller
        image: >-
          k8s.gcr.io/ingress-nginx/controller:v1.2.1
        args:
        - /nginx-ingress-controller
        - >-
          --publish-service=$(POD_NAMESPACE)/external-nginx-ingress-controller
        - '--election-id=external-ingress-controller-leader'
        - '--controller-class=k8s.io/ingress-nginx'
        - '--ingress-class=external-nginx'
        - '--ingress-class-by-name=true'
Deployment for Internal
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-nginx-ingress-controller
  namespace: ingress
  annotations:
    meta.helm.sh/release-name: internal-nginx-ingress
    meta.helm.sh/release-namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: internal-nginx-ingress
      app.kubernetes.io/name: ingress-nginx
  template:
    spec:
      containers:
      - name: ingress-nginx-internal-controller
        image: >-
          k8s.gcr.io/ingress-nginx/controller:v1.2.1
        args:
        - /nginx-ingress-controller
        - >-
          --publish-service=$(POD_NAMESPACE)/internal-nginx-ingress-controller
        - '--election-id=internal-ingress-controller-leader'
        - '--controller-class=k8s.io/internal-ingress-nginx'
        - '--ingress-class=internal-nginx'
        - '--ingress-class-by-name=true'
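With two controllers registered under different IngressClasses (and --ingress-class-by-name=true), each Ingress then has to pick one of them explicitly; a sketch with a placeholder host and service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  ingressClassName: external-nginx # or internal-nginx for the internal controller
  rules:
  - host: app.example.com # placeholder
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app # placeholder
            port:
              number: 80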

Traefik-ingress dashboard returns 404

I deployed the Traefik ingress controller pod and then two services: a LoadBalancer for the reverse proxy and a ClusterIP for the dashboard.
I also created an Ingress to redirect all <elb-address>/dashboard traffic to my Traefik dashboard,
but for some reason I get a 404 error code when I try to request my dashboard at <aws-ip>/dashboard.
These are the manifest YAMLs I use to set up Traefik:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    targetPort: 80
    port: 80
  type: LoadBalancer
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: kube-system
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /dashboard
        backend:
          serviceName: traefik-web-ui
          servicePort: web
Update
I am watching the logs and get the following errors with RBAC activated and the ClusterRole, RoleBinding and ServiceAccount created:
E1124 18:56:23.267560 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list endpoints in the namespace "default"
E1124 18:56:23.648207 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list services in the namespace "default"
E1124 18:56:23.267560 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list endpoints in the namespace "default"
These are my ServiceAccount, ClusterRole and RoleBinding:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik-ingress
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress
subjects:
- kind: ServiceAccount
  name: traefik-ingress
  namespace: default
Solution
I applied this:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
and then installed the stable/traefik chart with Helm:
helm install stable/traefik --name=traefik-ingress-controller --values values.yaml
The values.yaml file is:
dashboard:
  enabled: true
  domain: traefik-ui.k8s.io
rbac:
  enabled: true
kubernetes:
  namespaces:
  - default
  - kube-system
Thanks for the help.
I tried this myself. So basically when you create your Ingress it gets created with a host of traefik-ui.minikube (default), so you won't be able to access the dashboard with <elb-address>/dashboard/.
You will have to access it with traefik-ui.minikube/dashboard/. As an example:
$ kubectl -n kube-system get ingress
NAME HOSTS ADDRESS PORTS AGE
traefik-ingress * 80 8m13s
traefik-web-ui traefik-ui.minikube xxxx.elb.amazonaws.com 80 71d
$ curl -H 'Host: traefik-ui.minikube' xxxx.elb.amazonaws.com/dashboard/
<!doctype html><html class="has-navbar-fixed-top">
...
</html>
You can also add an entry to your /etc/hosts file if you'd like to see it on your browser.
<one-of-the-ips-of-your-elb> traefik-ui.minikube
And you can also add the host to the rules in your Ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: kube-system
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: yourown.hostname.com
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: traefik-web-ui
          servicePort: web
Just because I ran into this, the docs say:
The trailing slash / in /dashboard/ is mandatory

Couldn't access my Kubernetes service via a traefik reverse proxy

I deployed a Kubernetes cluster (1.8.8) on an OpenStack cloud platform (1 master with a public IP address / 3 nodes). I want to use Traefik (latest version, 1.6.1) as a reverse proxy for accessing my services.
Traefik was deployed as a DaemonSet and I can access its GUI on port 8081. My Prometheus Ingress appears correctly in the Traefik interface, but I can't access my Prometheus server UI.
Could you tell me what I am doing wrong? Did I miss something?
Thanks
Ingress of my prometheus:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
  - http:
      paths:
      - path: /prometheus
        backend:
          serviceName: prometheus-svc
          servicePort: prom
My daemonset is below:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: traefik
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      hostNetwork: true # workaround
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: traefik:v1.6.1
        name: traefik-ingress-lb
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        resources:
          requests:
            cpu: 100m
            memory: 20Mi
        args:
        - --kubernetes
        - --configfile=/config/traefik.toml
      volumes:
      - name: config
        configMap:
          name: traefik-conf
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: traefik
data:
  traefik.toml: |-
    defaultEntryPoints = ["http"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
    [web]
    address = ":8081"