Couldn't access my Kubernetes service via a Traefik reverse proxy

I deployed a Kubernetes cluster (1.8.8) on an OpenStack cloud platform (1 master with a public IP address, 3 nodes). I want to use Traefik (latest version, 1.6.1) as a reverse proxy for accessing my services.
Traefik was deployed as a DaemonSet and I can access its GUI on port 8081. My Prometheus Ingress appears correctly in the Traefik interface, but I can't access the Prometheus server UI.
Could you tell me what I am doing wrong? Did I miss something?
Thanks
The Ingress for my Prometheus:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
  - http:
      paths:
      - path: /prometheus
        backend:
          serviceName: prometheus-svc
          servicePort: prom
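Note that servicePort: prom references a port by name, so it only resolves if prometheus-svc actually declares a port named prom. A minimal sketch of the Service this Ingress expects; the selector label and the port number 9090 are assumptions, not taken from the question:
apiVersion: v1
kind: Service
metadata:
  name: prometheus-svc
spec:
  selector:
    app: prometheus # assumed pod label
  ports:
  - name: prom # must match servicePort in the Ingress above
    port: 9090 # assumed Prometheus port
    targetPort: 9090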
My DaemonSet manifest is below:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: traefik
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      hostNetwork: true # workaround
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: traefik:v1.6.1
        name: traefik-ingress-lb
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        resources:
          requests:
            cpu: 100m
            memory: 20Mi
        args:
        - --kubernetes
        - --configfile=/config/traefik.toml
      volumes:
      - name: config
        configMap:
          name: traefik-conf
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: traefik
data:
  traefik.toml: |-
    defaultEntryPoints = ["http"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
    [web]
      address = ":8081"

Related

Kubernetes cert-manager no certificate found on AWS ALB ingress

It's been a while and I can't get it to work. Basically I have a K8s cluster on AWS EKS; ExternalDNS is set up and works, and now I'm trying to add TLS/SSL certificates with cert-manager.
These are my configs:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-cluster-issuer
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: my-email
    privateKeySecretRef:
      name: letsencrypt-cluster-issuer-key
    solvers:
    - selector:
        dnsZones:
        - "example.it"
        - "*.example.it"
      dns01:
        route53:
          region: eu-central-1
          hostedZoneID: HOSTEDZONEID
          accessKeyID: ACCESSKEYID
          secretAccessKeySecretRef:
            name: route53-secret
            key: secretkey
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: le-crt
spec:
  secretName: tls-secret
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-cluster-issuer
  commonName: "*.example.it"
  dnsNames:
  - "*.example.it"
ExternalDNS:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods", "nodes"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: externaldns # change to desired namespace: externaldns, kube-addons
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.11.0
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=example.it # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=aws
        - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id=external-dns
        env:
        - name: AWS_DEFAULT_REGION
          value: eu-central-1 # change to region where EKS is installed
Cert-manager is deployed in the cert-manager namespace, while ExternalDNS is in its own externaldns namespace. The AWS ALB ingress controller is in kube-system.
Finally, my Ingress, deployed in the default namespace:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: master
  namespace: default
  labels:
    name: master
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/group.name: "alta"
    alb.ingress.kubernetes.io/group.order: "0"
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer
spec:
  ingressClassName: alb
  tls:
  - hosts:
    - "example.it"
    secretName: "tls-secret"
  rules:
  - host: example.it
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: echoserver # random test service, returns some page w/some info
            port:
              number: 80
With all this config, I still get "no certificate found for host: example.it" in my Ingress. The certificate is being issued and everything looks OK. Do you have an idea? I'm going insane over this.
Posting this in case someone encounters the same problem.
Basically, AWS ALB does not support cert-manager: you have to go to AWS ACM, get yourself a certificate there, and then add it through the certificate-arn annotation on your Ingress. Then everything should start working. Thanks Reddit for this.
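Concretely, that means requesting or importing a certificate in ACM and referencing its ARN from the Ingress instead of relying on the cert-manager Secret; the ARN below is a placeholder:
metadata:
  annotations:
    # ARN of a certificate issued/validated in AWS Certificate Manager
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:123456789012:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx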

Unable to find Prometheus custom app exporter as a target in Prometheus server deployed in Kubernetes

I created a custom exporter in Python using the prometheus-client package, then created the necessary artifacts to have the metric appear as a target in Prometheus deployed on Kubernetes.
But I am unable to see the metric as a target despite following all available instructions.
Help in finding the problem is appreciated.
Here is a summary of what I did:
- Installed Prometheus using Helm on the K8s cluster, in a namespace prometheus
- Created a Python program with the prometheus-client package to create a metric
- Created and deployed an image of the exporter on Docker Hub
- Created a Deployment against the metrics image, in a namespace prom-test
- Created a Service, ServiceMonitor, and a ServiceMonitorSelector
- Created a service account, role, and binding to enable access to the endpoint
Following is the code.
Service & Deployment
apiVersion: v1
kind: Service
metadata:
  name: test-app-exporter
  namespace: prom-test
  labels:
    app: test-app-exporter
spec:
  type: ClusterIP
  selector:
    app: test-app-exporter
  ports:
  - name: http
    protocol: TCP
    port: 6000
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: prom-test
spec:
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      #serviceAccount: test-app-exporter-sa
      containers:
      - name: test-app-exporter
        image: index.docker.io/cbotlagu/test-app-exporter:2
        imagePullPolicy: Always
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 5000
      imagePullSecrets:
      - name: myregistrykey
Service account and role binding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-app-exporter-sa
  namespace: prom-test
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-app-exporter-binding
subjects:
- kind: ServiceAccount
  name: test-app-exporter-sa
  namespace: prom-test
roleRef:
  kind: ClusterRole
  name: test-app-exporter-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-app-exporter-role
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
Service Monitor & Selector
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: test-app-exporter-sm
  namespace: prometheus
  labels:
    app: test-app-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      # Target app service
      app: test-app-exporter
  endpoints:
  - port: http
    interval: 15s
    path: /metrics
  namespaceSelector:
    matchNames:
    - default
    - prom-test
    #any: true
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: service-monitor-selector
  namespace: prometheus
spec:
  serviceAccountName: test-app-exporter-sa
  serviceMonitorSelector:
    matchLabels:
      app: test-app-exporter-sm
      release: prometheus
  resources:
    requests:
      memory: 400Mi
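One mismatch stands out in the pair above: the Prometheus resource selects ServiceMonitors labeled app: test-app-exporter-sm, but the ServiceMonitor itself carries the label app: test-app-exporter (test-app-exporter-sm is its name, not its label), so the selector matches nothing. A sketch of the aligned selector:
spec:
  serviceMonitorSelector:
    matchLabels:
      app: test-app-exporter # must equal the ServiceMonitor's label, not its name
      release: prometheus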
I am able to get the target identified by Prometheus, but even though the endpoint can be reached from within the cluster as well as from the node IP, Prometheus says the target is down.
In addition to that, I am unable to see any other targets.
[Screenshot: Prometheus UI targets page]
Any help is greatly appreciated.
Following is my changed code.
Deployment & Service
apiVersion: v1
kind: Namespace
metadata:
  name: prom-test
---
apiVersion: v1
kind: Service
metadata:
  name: test-app-exporter
  namespace: prom-test
  labels:
    app: test-app-exporter
spec:
  type: NodePort
  selector:
    app: test-app-exporter
  ports:
  - name: http
    protocol: TCP
    port: 5000
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: prom-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      serviceAccountName: rel-1-kube-prometheus-stac-operator
      containers:
      - name: test-app-exporter
        image: index.docker.io/cbotlagu/test-app-exporter:2
        imagePullPolicy: Always
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 5000
      imagePullSecrets:
      - name: myregistrykey
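A caveat with the Deployment above: serviceAccountName can only reference a ServiceAccount in the pod's own namespace, and rel-1-kube-prometheus-stac-operator lives in monitoring (as the role bindings below assume), so pod creation in prom-test will fail unless an account with that name also exists there. A minimal per-namespace account (the name mirrors the earlier one and is an assumption):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-app-exporter-sa
  namespace: prom-test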
Roles & RoleBindings
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: prom-test
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - endpoints
  - pods
  - services
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-prom-test
  namespace: prom-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: rel-1-kube-prometheus-stac-operator
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: monitoring-role
  namespace: monitoring
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - endpoints
  - pods
  - services
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-prom-test
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-role
subjects:
- kind: ServiceAccount
  name: rel-1-kube-prometheus-stac-operator
  namespace: monitoring
Service Monitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: test-app-exporter-sm
  namespace: monitoring
  labels:
    app: test-app-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      # Target app service
      app: test-app-exporter
  endpoints:
  - port: http
    interval: 15s
    path: /metrics
  namespaceSelector:
    matchNames:
    - prom-test
    - monitoring
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: service-monitor-selector
  namespace: monitoring
spec:
  serviceAccountName: rel-1-kube-prometheus-stac-operator
  serviceMonitorSelector:
    matchLabels:
      app: test-app-exporter
      release: prometheus
  resources:
    requests:
      memory: 400Mi

Traefik in microk8s always 404 through HTTPS

I deployed a microk8s single-node cluster on a simple & small VPS.
At the moment I am running without an SSL cert (Traefik's default cert).
The http:80 version of the Ingress works correctly; I can browse the web pages at the correct Ingress over HTTP, but when I try HTTPS, Traefik shows a 404.
I'd appreciate it if anyone can help me.
Many thanks
This is my Traefik config & my Ingress config.
Traefik:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressrouteudps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteUDP
    plural: ingressrouteudps
    singular: ingressrouteudp
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsstores.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSStore
    plural: tlsstores
    singular: tlsstore
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
  scope: Namespaced
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - traefik.containo.us
  resources:
  - middlewares
  - ingressroutes
  - traefikservices
  - ingressroutetcps
  - ingressrouteudps
  - tlsoptions
  - tlsstores
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  selector:
    matchLabels:
      name: traefik
  template:
    metadata:
      labels:
        name: traefik
    spec:
      terminationGracePeriodSeconds: 60
      # hostPort doesn't work with CNI, so we have to use hostNetwork instead
      # see https://github.com/kubernetes/kubernetes/issues/23920
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik
        image: traefik:v2.2
        args:
        - --ping
        - --ping.entrypoint=http
        - --api.insecure
        - --accesslog
        - --entrypoints.web.Address=:80
        - --entrypoints.websecure.Address=:443
        #- --providers.kubernetescrd
        - --providers.kubernetesingress
        - forwardedHeaders.trustedIPs:["Public IP VPS running microk8s"]
        #- --certificatesresolvers.default.acme.tlschallenge
        #- --certificatesresolvers.default.acme.email=foo#you.com
        #- --certificatesresolvers.default.acme.storage=acme.json
        # Please note that this is the staging Let's Encrypt server.
        # Once you get things working, you should remove that whole line altogether.
        #- --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
        ports:
        - name: web
          containerPort: 80
        - name: websecure
          containerPort: 443
        - name: admin
          containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: web
    port: 80
  - protocol: TCP
    name: admin
    port: 8080
  - protocol: TCP
    name: websecure
    port: 443
  selector:
    app: traefik
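As an aside, the Service above selects pods labeled app: traefik, but the DaemonSet's pod template only carries name: traefik (app: traefik is set on the DaemonSet object itself, not on the pods), so this Service ends up with no endpoints; with hostNetwork: true the pods are reachable on the node anyway, which can mask the mismatch. The selector that matches the pod template:
spec:
  selector:
    name: traefik # matches the DaemonSet's pod template labels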
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/redirect-permanent: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/ssl-temporary-redirect: "false"
    ingress.kubernetes.io/ssl-proxy-headers: "X-Forwarded-Proto: https"
spec:
  rules:
  - host: front-dev.mgucommunity.com
    http:
      paths:
      - path: /
        backend:
          serviceName: front
          servicePort: 80
Looks like you are missing 👀 the websecure entrypoint annotation, so that Traefik also serves this Ingress on port 443:
traefik.ingress.kubernetes.io/router.entrypoints: web, websecure
Note that if you want to redirect all your traffic to HTTPS, you would have to have this in your DaemonSet config:
...
- --entrypoints.web.http.redirections.entryPoint.to=websecure
- --entrypoints.websecure.http.tls.certResolver=default
...
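Put together, the front Ingress from the question would gain one annotation; everything else stays unchanged:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: front
  annotations:
    kubernetes.io/ingress.class: traefik
    # serve this Ingress on both the HTTP and HTTPS entrypoints
    traefik.ingress.kubernetes.io/router.entrypoints: web, websecure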
This write-up on how to use a K8s Ingress with Traefik v2 might also help.
✌️

How to remove nodePort Kubernetes - Traefik ingress controller

Following this tutorial:
https://medium.com/kubernetes-tutorials/deploying-traefik-as-ingress-controller-for-your-kubernetes-cluster-b03a0672ae0c
I am able to access the site by visiting www.domain.com:nodePort
Is it possible to omit the nodePort part? Could you provide an example?
Is it possible to omit the nodePort part?
Yes and no.
No, not directly: Kubernetes always exposes external services, even LoadBalancer-type services, on a node port.
Yes, if you front it with a load balancer: either your own, forwarding port 80 and/or 443 to your NodePort, or a LoadBalancer-type service, which essentially sets up an external load balancer that forwards traffic to your NodePort.
Could you provide an example?
The NodePort service to expose your ingress stays basically the same; you just need to set up your own external load balancer (i.e. AWS ELB/ALB/NLB, GCP load balancer, Azure load balancer, F5, etc.):
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: NodePort
The LoadBalancer type is just a change to the service's type:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: LoadBalancer
In the case above, Kubernetes will automatically manage the load balancer with the cloud provider.
Try deploying the code below. This is a simple whoami pod which can be deployed alongside Traefik and accessed at http://localhost/whoami-app-api when deployed on the local machine. The dashboard is also available at http://localhost:8080/dashboard.
Deployment File:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressrouteudps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteUDP
    plural: ingressrouteudps
    singular: ingressrouteudp
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsstores.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSStore
    plural: tlsstores
    singular: tlsstore
  scope: Namespaced
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
  scope: Namespaced
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - traefik.containo.us
  resources:
  - middlewares
  - ingressroutes
  - traefikservices
  - ingressroutetcps
  - ingressrouteudps
  - tlsoptions
  - tlsstores
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: default
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik
        image: traefik:v2.1
        args:
        - --accesslog=true
        - --api
        - --api.insecure
        - --entrypoints.web.address=:80
        - --entrypoints.websecure.address=:443
        - --providers.kubernetescrd
        - --configfile=/config/traefik.toml
        ports:
        - name: web
          containerPort: 80
        - name: admin
          containerPort: 8080
        - name: websecure
          containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
  - protocol: TCP
    port: 80
    name: web
    targetPort: 80
  - protocol: TCP
    port: 443
    name: websecure
    targetPort: 443 # was 80, which would send HTTPS traffic to the HTTP entrypoint
  - protocol: TCP
    port: 8080
    name: admin
    targetPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: whoami
labels:
app: whoami
spec:
replicas: 1
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: containous/whoami
ports:
- name: web
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: whoami
spec:
ports:
- protocol: TCP
name: web
port: 80
selector:
app: whoami
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: whoami-whoami
namespace: default
spec:
entryPoints:
- web
- websecure
routes:
- match: PathPrefix(`/whoami-app-api`)
kind: Rule
services:
- name: whoami
port: 80

K8s / cert-manager / Errors and possible YAML misconfiguration

I am attempting to get cert-manager working with Let's Encrypt and I am running into some interesting errors, such as cert-manager not being able to access resources. I am also seeing two certificates and two certificaterequests when I would only expect one. I've attached some pictures of the logging and the output from the certs and cert requests. I've tried quite a few adjustments, but I seem to be spinning my wheels. Any help is greatly appreciated!
---
kind: Secret
apiVersion: v1
metadata:
  name: coreyperkinsdev-production-clusterissuer-acme
  namespace: default
data:
coreyperkinsdevacmedns.json: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEx3U3hZcVRzTWpuejZrVytUWVl5cDh1cFRaZkFWc1U1bGQvZzE4aXJ6T1pwaEZyCkRwdnJ5TkhLam9CMWJ2TjNtdDM0VkxWK2JQM3NHZEJCSW83MEVoTzl3VS83TVRDZVJScGtISmlCYzc3dUJRUGgKTjVaalZUM0lEOEIyUzZTOWFONWVMNUhaWHYrbExhd0pjUEdVYUtlcVpmVG1XTVVHOFFHTTJtK1pJWGNNY1dyUwpRZ1hSSnM5anUzQndsWGZGR1hOeGJ6SnlyL3ZKZkwrVkU4Y09YN1h0WlBtQ3BjWEd2TVUzQ3BuaTJJYTJiUmxJClFvb0w3N2I4SWJDTm12VVM3Y3U3c2J3cTRVbUJUaWRUYVU3UFRtQjJuMXZRTm9uZjllRVRkSDkwYkpURFFadDQKRURaa0kzeU1QbXl5NU5tV3poOEViVGFSQ2VjU05PM2ZkSHNQQndJREFRQUJBb0lCQUZ6T2VzRXZjTG1GNE90VQpnN1NDMytZa0tYcXkyY1JEZGc3MS9VZURjYmNQd3lWd3FrMjlLUzFjOVN1SnNVUEJCN2pocEtReThuR2JUa0xQCkdyTlpQdUdOZTVJcHRPNUViZkZFSlFZK0ZiTk81c0J5aHBMWnliWTYzR0dpVGR4NEFyODMrSnRxeEhrd3d5d1oKdkZ0ZjRmcC9wbE5tbDJZYi9uMkJjMGV6cUtseXNHbzZDL3IxdmRZZ0RqTEtzT1I0MTF4WS8rN0xuS2JBQWpVcApoQTRseG5LaFpjMzZONmFjQVowdGZPL1FzcXpqNWRsQTduMlgvQ1lSRkxXdjZML09PdHZlWlZCeGtqT0xQL3hxClhKZXJjSE0vMmN2YVZtbzVTM2ltRlNWNkxKeEF4UW10T1E4NUdTVHRXMWVIQUsrQldXL000eHdUM0NPcUx1aDIKSXdQcUlha0NnWUVBMWRxRkc5N3NQZE80bUFPdUcyWUJ5VkpnWXhrYzBvK0trTm1ITXc4YkJDanpZdXJDaTVFUAptZkFJV25DYUxQaSsrSE9FbXF3M0RVY1lMOFVWcHJkMkpBVG96VVFWcXBJbHUzQ3RzenVicGF6Rk9pUVBRVHZwCnp1dmNEMFpHeStsNE9OK2lYdllQQVBzbGQreHYzcFQxOWRsWFBVRGJYUHNvRXFsZ0V3bDNsN1VDZ1lFQTJGcWQKbExJSVlwWlVIUjFJbTI5c1J6Y2RyRjNjd21pd1lnVEVmT09QVVpwY05TVnkvc1hmOFkveEFkWHo2bkVsaGhXMApxUGtzbHJjdm9XM29DcXF4dElwQ3JCcUxWTjg4azFHZjNjUFRJR1FkeDh5ajVKdDJtbFRpU3Eyc3kxMUpUa3FPCmVzdk0wS1ZJT0lQMm11dXVaeFNpNEgxakE1Nlp2QXI2cnFMVFNVc0NnWUEwaVM1U0huMmk0clJpZytUdHppMTYKSzhhS0VjMUczUVNKZVNjQm9DQmU4VUI1ZUhxNmxyUmllTmxVZm4waHR5b1RGeTNvWVk1VXNMWjhaY3BmM29vagpaeUZaNi9QMnAxaWxwNVRFaDB4QmN5UXdtRk0zRDJUczlIeG5ORGlJTjU3Vk9mdEZvT1VtdEl3TDNnWE5oSUs0CkZ1Q2JwNmM4UEdjbnpueFBzTyswVVFLQmdISE0rQ1pHbnZKOGNESUFQVGpGR3djNmpua2p4Z0xjWGlxd3AwbXAKeUxEN3FKU3I1aGpzckNhN3QrRm5VSzE0Wm14bzdtWVM2c2s4QWVtL2pkWk9ncnFjSHdXMzBLSUw2aWp6UGt1Lwp2VVhFWTRXOHRsaUJEWm1RSEpkN1V2Q0ZXUkc5VmNSeGZvSWc3aVFNQmFMblpRMERaY2ovS3gyMFJ0a0tUV0dlCmM5U1JBb0dCQUpZY1hpU3RRNkVyM0xqeit2TTUzN05NdENIVFdBZ05yZzM0ZWdYMzJyZVZZdkd5amhKYk1yaEwKSkxxT0U5NmE3OUJZbDRyZERoNUliL1psUmhOUUpleENrRVFLUm12QmkwVHFUZ21EZmhxa3plOVRKTllRbndBRApHMU5xL0VaN2RCQ3gvcmlhbTRxUktxc0FwWDhWcjc3NGFKSCsvcFBuZ0xucHI3emFib0hyCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
tls.key: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdEx3U3hZcVRzTWpuejZrVytUWVl5cDh1cFRaZkFWc1U1bGQvZzE4aXJ6T1pwaEZyCkRwdnJ5TkhLam9CMWJ2TjNtdDM0VkxWK2JQM3NHZEJCSW83MEVoTzl3VS83TVRDZVJScGtISmlCYzc3dUJRUGgKTjVaalZUM0lEOEIyUzZTOWFONWVMNUhaWHYrbExhd0pjUEdVYUtlcVpmVG1XTVVHOFFHTTJtK1pJWGNNY1dyUwpRZ1hSSnM5anUzQndsWGZGR1hOeGJ6SnlyL3ZKZkwrVkU4Y09YN1h0WlBtQ3BjWEd2TVUzQ3BuaTJJYTJiUmxJClFvb0w3N2I4SWJDTm12VVM3Y3U3c2J3cTRVbUJUaWRUYVU3UFRtQjJuMXZRTm9uZjllRVRkSDkwYkpURFFadDQKRURaa0kzeU1QbXl5NU5tV3poOEViVGFSQ2VjU05PM2ZkSHNQQndJREFRQUJBb0lCQUZ6T2VzRXZjTG1GNE90VQpnN1NDMytZa0tYcXkyY1JEZGc3MS9VZURjYmNQd3lWd3FrMjlLUzFjOVN1SnNVUEJCN2pocEtReThuR2JUa0xQCkdyTlpQdUdOZTVJcHRPNUViZkZFSlFZK0ZiTk81c0J5aHBMWnliWTYzR0dpVGR4NEFyODMrSnRxeEhrd3d5d1oKdkZ0ZjRmcC9wbE5tbDJZYi9uMkJjMGV6cUtseXNHbzZDL3IxdmRZZ0RqTEtzT1I0MTF4WS8rN0xuS2JBQWpVcApoQTRseG5LaFpjMzZONmFjQVowdGZPL1FzcXpqNWRsQTduMlgvQ1lSRkxXdjZML09PdHZlWlZCeGtqT0xQL3hxClhKZXJjSE0vMmN2YVZtbzVTM2ltRlNWNkxKeEF4UW10T1E4NUdTVHRXMWVIQUsrQldXL000eHdUM0NPcUx1aDIKSXdQcUlha0NnWUVBMWRxRkc5N3NQZE80bUFPdUcyWUJ5VkpnWXhrYzBvK0trTm1ITXc4YkJDanpZdXJDaTVFUAptZkFJV25DYUxQaSsrSE9FbXF3M0RVY1lMOFVWcHJkMkpBVG96VVFWcXBJbHUzQ3RzenVicGF6Rk9pUVBRVHZwCnp1dmNEMFpHeStsNE9OK2lYdllQQVBzbGQreHYzcFQxOWRsWFBVRGJYUHNvRXFsZ0V3bDNsN1VDZ1lFQTJGcWQKbExJSVlwWlVIUjFJbTI5c1J6Y2RyRjNjd21pd1lnVEVmT09QVVpwY05TVnkvc1hmOFkveEFkWHo2bkVsaGhXMApxUGtzbHJjdm9XM29DcXF4dElwQ3JCcUxWTjg4azFHZjNjUFRJR1FkeDh5ajVKdDJtbFRpU3Eyc3kxMUpUa3FPCmVzdk0wS1ZJT0lQMm11dXVaeFNpNEgxakE1Nlp2QXI2cnFMVFNVc0NnWUEwaVM1U0huMmk0clJpZytUdHppMTYKSzhhS0VjMUczUVNKZVNjQm9DQmU4VUI1ZUhxNmxyUmllTmxVZm4waHR5b1RGeTNvWVk1VXNMWjhaY3BmM29vagpaeUZaNi9QMnAxaWxwNVRFaDB4QmN5UXdtRk0zRDJUczlIeG5ORGlJTjU3Vk9mdEZvT1VtdEl3TDNnWE5oSUs0CkZ1Q2JwNmM4UEdjbnpueFBzTyswVVFLQmdISE0rQ1pHbnZKOGNESUFQVGpGR3djNmpua2p4Z0xjWGlxd3AwbXAKeUxEN3FKU3I1aGpzckNhN3QrRm5VSzE0Wm14bzdtWVM2c2s4QWVtL2pkWk9ncnFjSHdXMzBLSUw2aWp6UGt1Lwp2VVhFWTRXOHRsaUJEWm1RSEpkN1V2Q0ZXUkc5VmNSeGZvSWc3aVFNQmFMblpRMERaY2ovS3gyMFJ0a0tUV0dlCmM5U1JBb0dCQUpZY1hpU3RRNkVyM0xqeit2TTUzN05NdENIVFdBZ05yZzM0ZWdYMzJyZVZZdkd5amhKYk1yaEwKSkxxT0U5NmE3OUJZbDRyZERoNUliL1psUmhOUUpleENrRVFLUm12QmkwVHFUZ21EZmhxa3plOVRKTllRbndBRApHMU5xL0VaN2RCQ3gvcmlhbTRxUktxc0FwWDhWcjc3NGFKSCsvcFBuZ0xucHI3emFib0hyCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coreyperkins-deployment
  labels:
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: coreyperkins-frontend
      version: v1
  template:
    metadata:
      labels:
        app: coreyperkins-frontend
        version: v1
    spec:
      containers:
      - name: coreyperkins-frontend
        image: coreyperkinsdev.azurecr.io/www:52
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: coreyperkinsdev-acr-secret
---
apiVersion: v1
kind: Service
metadata:
  name: coreyperkins-service
  labels:
    app: coreyperkins-frontend
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 80
    name: http
  selector:
    app: coreyperkins-frontend
---
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: coreyperkinsdev-production-clusterissuer
  namespace: cert-manager
spec:
  acme:
    email: corey.perkins#gmail.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: coreyperkinsdev-production-clusterissuer-acme
    solvers:
    - dns01:
        acmedns:
          host: https://acme-staging-v02.api.letsencrypt.org/directory
          accountSecretRef:
            name: coreyperkinsdev-production-clusterissuer-acme
            key: coreyperkinsdevacmedns.json
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: coreyperkinsdev-production-certificate
  namespace: cert-manager
spec:
  secretName: coreyperkinsdev-production-clusterissuer-acme
  issuerRef:
    name: coreyperkinsdev-production-clusterissuer
    kind: ClusterIssuer
  commonName: coreyperkins.dev
  dnsNames:
  - coreyperkins.dev
  - '*.coreyperkins.dev'
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: coreyperkinsdev-ingress
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: coreyperkinsdev-production-clusterissuer
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - coreyperkins.dev
    - '*.coreyperkins.dev'
    secretName: coreyperkinsdev-production-clusterissuer-acme
  rules:
  - host: www.coreyperkins.dev
  - http:
      paths:
      - path: /?(.*)
        backend:
          serviceName: coreyperkins-service
          servicePort: 5000
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: coreyperkinsdev-ng-cm
data:
  http-snippet: |
    types {
      module;
    }
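Two things in the manifests above are worth a look, offered as hedged suggestions rather than a confirmed diagnosis. First, the same Secret name (coreyperkinsdev-production-clusterissuer-acme) is used for the issuer's ACME account key (privateKeySecretRef), the Certificate's secretName, and the Ingress TLS secret; the account-key Secret and the issued-certificate Secret must be distinct objects, and sharing one name can produce exactly the kind of duplicated certificates/certificaterequests described. Second, in the Ingress rules the leading dash before http: opens a second, host-less rule instead of attaching the paths to www.coreyperkins.dev. A sketch of the corrected rule, with hypothetical distinct secret names noted in comments:
# ClusterIssuer: privateKeySecretRef.name: coreyperkinsdev-acme-account-key (hypothetical)
# Certificate:   secretName: coreyperkinsdev-tls (hypothetical; reference the same name in the Ingress tls block)
rules:
- host: www.coreyperkins.dev
  http: # no leading dash: these paths belong to the host above
    paths:
    - path: /?(.*)
      backend:
        serviceName: coreyperkins-service
        servicePort: 5000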