Enable HTTPS on a local domain with Kubernetes / Traefik Ingress

When I test my Spring Boot app without Docker, I call it at:
https://localhost:8081/points/12345/search
and it works great. I get an error if I use plain HTTP.
Now I want to deploy it locally with Kubernetes, at the URL: https://sge-api.local
When I use HTTP, I get the same error as when I run without Docker.
But when I use HTTPS, I get:
<html><body><h1>404 Not Found</h1></body></html>
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sge-api-local
  name: sge-api-local
  namespace: sge
spec:
  selector:
    matchLabels:
      app: sge-api-local
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: sge-api-local
    spec:
      containers:
      - image: sge_api:local
        name: sge-api-local
Here is my ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: sge-ingress
  namespace: sge
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: sge-api.local
    http:
      paths:
      - backend:
          serviceName: sge-api-local
          servicePort: 8081
  tls:
  - secretName: sge-api-tls-cert
created with:
kubectl -n kube-system create secret tls sge-api-tls-cert --key=../certs/privkey.pem --cert=../certs/cert1.pem
Finally, here is my service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sge-api-local
  name: sge-api-local
  namespace: sge
spec:
  ports:
  - name: "8081"
    port: 8081
  selector:
    app: sge-api-local
What should I do?
EDIT:
traefik-config.yml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
data:
  traefik.toml: |
    # traefik.toml
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]
traefik-deployment:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
traefik-service.yml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin

Please make sure that you have enabled TLS. Let's Encrypt is a free TLS Certificate Authority (CA) and you can use it to automatically request and renew certificates for public domain names. Make sure that you have created the ConfigMap, and check that you followed every step of the Traefik setup: traefik-ingress-controller.
Then you have to specify which hosts the created secret applies to, e.g.:
tls:
- secretName: sge-api-tls-cert
  hosts:
  - sge-api.local
Remember to include the port assigned to the host when calling the URL.
In your case it should be: https://sge-api.local:8081
When using SSL offloading outside of the cluster, it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available.
You could also add annotations to the Ingress configuration file:
traefik.ingress.kubernetes.io/frontend-entry-points: http, https
traefik.ingress.kubernetes.io/redirect-entry-point: https
to enable redirecting to another entryPoint for that frontend (e.g. HTTPS).
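For illustration, a minimal sketch of the metadata of the sge-ingress from the question with those annotations added; the rest of the spec stays exactly as in your original Ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: sge-ingress
  namespace: sge
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https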
Let me know if it helps.

Related

Expose RabbitMQ management via Kubernetes Ingress

I have a Kubernetes cluster with a deployment of RabbitMQ. I want to expose the RabbitMQ management UI so that I can access it in my browser. To do that I have a deployment, service and ingress file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - image: rabbitmq:3.8.9-management
        name: rabbitmq
        ports:
        - containerPort: 5672
        - containerPort: 15672
        resources: {}
      restartPolicy: Always
The service:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  ports:
  - name: "5672"
    port: 5672
    targetPort: 5672
  - name: "15672"
    port: 15672
    targetPort: 15672
  selector:
    app: rabbitmq
Ingress file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - http:
      paths:
      - path: /rabbitmq
        pathType: Prefix
        backend:
          service:
            name: rabbitmq
            port:
              number: 15672
When I type http://localhost/rabbitmq in my browser I get this nginx error: {"error":"Object Not Found","reason":"Not Found"}
But when I exec into some other pod and run curl http://rabbitmq:15672, I get a response from the website.
I'm new to Kubernetes and haven't found any relevant solution to my problem. If someone could help me I would be very grateful!
Thanks for reading.
Try:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx # <-- assumed you only have 1 ingress-nginx
  rules:
  - http:
      paths:
      - path: /rabbitmq(/|$)(.*)
        ...
A request to http://localhost/rabbitmq will be seen by your rabbitmq service as /.
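For completeness, a sketch of how the full rewrite Ingress could look with the backend block carried over from the question; pathType: ImplementationSpecific is an assumption here because the path is a regex, the rest is unchanged:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /rabbitmq(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: rabbitmq
            port:
              number: 15672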

Ingress creating health check on HTTP instead of TCP

I am trying to run 3 containers in my GKE cluster. I have them exposed via a network load balancer, and on top of that I am using an Ingress so I can reach my services from different domains with SSL certs on them.
Here is the complete manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: us-east4-docker.pkg.dev/web:e856485 # docker image
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: production
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cms
  template:
    metadata:
      labels:
        app: cms
    spec:
      containers:
      - name: cms
        image: us-east4-docker.pkg.dev/cms:4e1fe2f # docker image
        ports:
        - containerPort: 8055
        env:
        - name: DB
          value: "postgres"
        - name: DB_HOST
          value: 10.142.0.3
        - name: DB_PORT
          value: "5432"
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us-east4-docker.pkg.dev/api:4e1fe2f # docker image
        ports:
        - containerPort: 8080
        env:
        - name: HOST
          value: "0.0.0.0"
        - name: PORT
          value: "8080"
        - name: NODE_ENV
          value: production
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: web
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: web
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: cms-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: cms
spec:
  ports:
  - port: 8055
    protocol: TCP
    targetPort: 8055
  selector:
    app: cms
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: api
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: api
  type: NodePort
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: web-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: cms-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: api-cert
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: api-cert
  - secretName: cms-cert
  - secretName: web-cert
  rules:
  - host: web-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: web-lb
            port:
              number: 3000
  - host: cms-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: cms-lb
            port:
              number: 8055
  - host: api-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: api-lb
            port:
              number: 8080
The containers are accessible through the (network) load balancer, but from the Ingress (L7 LB) the health check is failing.
I tried editing the health checks manually from HTTP:80 to TCP 8080/8055/3000 for the 3 services and it works.
But eventually the Ingress reverts them back to HTTP health checks and they fail again. I also tried using NodePort instead of LoadBalancer as the service type, but no luck.
Any help?
The first thing I would like to mention is that you need to recheck your implementation: from what I see, you are creating an Ingress which will create a load balancer, and this Ingress uses three services of type LoadBalancer, each of which will also create its own load balancer (I'm assuming the default behaviour, unless you applied the well-known workaround of deleting the service's load balancer manually after it is created).
I don't think this is correct unless you need that design for some reason, so my suggestion is that you might want to change your service types to NodePort.
As for answering your question, what you are missing is that you need to implement a BackendConfig with custom health check configuration.
1- Create the BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTH_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    type: PROTOCOL
    requestPath: PATH
    port: PORT
2- Use this config in your service(s):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {
      "PORT_NAME_1":"api-lb-backendconfig"
    }}'
spec:
  ports:
  - name: PORT_NAME_1
    port: PORT_NUMBER_1
    protocol: TCP
    targetPort: TARGET_PORT
Once you apply such configurations, your Ingress's load balancer will be created with the BackendConfig "api-lb-backendconfig".
Consider this documentation page as your reference.
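As an illustration only, here is how the placeholders might be filled in for the api service from the question; the interval, timeout, thresholds, request path and port name below are assumptions, not values taken from the question:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz   # assumed health endpoint of the api container
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"api-port":"api-lb-backendconfig"}}'
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - name: api-port   # hypothetical port name referenced by the backend-config annotation
    port: 8080
    protocol: TCP
    targetPort: 8080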

How to deny default but allow HTTP and TCP traffic in istio kubernetes cluster?

I have a cluster with Istio injection enabled and a CockroachDB StatefulSet defined:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cockroachdb-serviceaccount
---
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
  # The main port, served by gRPC, serves Postgres-flavor SQL, internode
  # traffic and the cli.
  - port: 26257
    targetPort: 26257
    name: tcp
  # The secondary port serves the UI as well as health and debug endpoints.
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: cockroachdb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb-statefulset
  labels:
    version: v20.1.2
spec:
  serviceName: cockroachdb
  replicas: 3
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
        version: v20.1.2
    spec:
      serviceAccountName: cockroachdb-serviceaccount
      containers:
      - name: cockroachdb
        image: cockroachdb/cockroach:v20.1.2
        ports:
        - containerPort: 26257
          name: tcp
        - containerPort: 8080
          name: http
        volumeMounts:
        - name: datadir
          mountPath: /cockroach/cockroach-data
        env:
        - name: COCKROACH_CHANNEL
          value: kubernetes-insecure
        command:
        - "/bin/bash"
        - "-ecx"
        # The use of qualified `hostname -f` is crucial:
        # Other nodes aren't able to look up the unqualified hostname.
        - "exec /cockroach/cockroach start --logtostderr --insecure --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join cockroachdb-statefulset-0.cockroachdb,cockroachdb-statefulset-1.cockroachdb,cockroachdb-statefulset-2.cockroachdb --cache 25% --max-sql-memory 25%"
      # No pre-stop hook is required, a SIGTERM plus some time is all that's
      # needed for graceful shutdown of a node.
      terminationGracePeriodSeconds: 5
      volumes:
      - name: datadir
        persistentVolumeClaim:
          claimName: datadir
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: 4Gi
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: cockroachdb-public
spec:
  host: cockroachdb-public
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cockroachdb-public
spec:
  hosts:
  - cockroachdb-public
  http:
  - match:
    - port: 8080
    route:
    - destination:
        host: cockroachdb-public
        port:
          number: 8080
  tcp:
  - match:
    - port: 26257
    route:
    - destination:
        host: cockroachdb-public
        port:
          number: 26257
and a service that accesses it:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: downstream-serviceaccount
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: downstream-deployment-v1
  labels:
    app: downstream
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: downstream
      version: v1
  template:
    metadata:
      labels:
        app: downstream
        version: v1
    spec:
      serviceAccountName: downstream-serviceaccount
      containers:
      - name: downstream
        image: downstream:0.1
        ports:
        - containerPort: 80
        env:
        - name: DATABASE_URL
          value: postgres://roach@cockroachdb-public:26257/roach?sslmode=disable
---
apiVersion: v1
kind: Service
metadata:
  name: downstream-service
  labels:
    app: downstream
spec:
  type: ClusterIP
  selector:
    app: downstream
  ports:
  - port: 80
    targetPort: 80
    name: http
    protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: downstream-service
spec:
  host: downstream-service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: downstream-service
spec:
  hosts:
  - downstream-service
  http:
  - name: "downstream-service-routes"
    match:
    - port: 80
    route:
    - destination:
        host: downstream-service
        port:
          number: 80
Now I'd like to restrict access to CockroachDB so that only downstream-service and cockroachdb itself can reach it (since the nodes need to communicate with each other).
I'm trying to restrict the traffic with something like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  {}
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
  - to:
    - operation:
        ports: ["26257"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
  - to:
    - operation:
        ports: ["26257"]
but it doesn't seem to do anything. I can still, for example, access the cockroachdb-public:8080 cluster HTTP UI from downstream-service.
Now when I add the following:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny-all-to-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: DENY
  rules:
  - to:
    - operation:
        ports: ["26257"]
then all the traffic is blocked (including the traffic between cockroachdb nodes).
What am I doing wrong here?
You are having the same problem as someone a couple of days ago. In your authorization policy you actually have two rules:
the service account downstream-serviceaccount (and cockroachdb-serviceaccount for the other authorization policy) from the default namespace can access the service with the label app: cockroachdb, on any port, in the default namespace;
any service account, from any namespace, can access the service with the label app: cockroachdb, on port 26257.
In order to make it an AND, you would do this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
    to: <- remove the dash from here
    - operation:
        ports: ["26257"]
Same with the other AuthorizationPolicy object. Also note that you don't need to explicitly create a DENY policy. When you create an ALLOW one, it automatically denies everything else.
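Applying the same change to the downstream policy would look roughly like this (fields copied from the question, with only the dash in front of to: removed):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
    to:
    - operation:
        ports: ["26257"]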

How to create an HTTPS route to a Service that is listening on HTTPS with Traefik and Kubernetes

I'm a newbie with Kubernetes and Traefik.
I followed this tutorial:
https://docs.traefik.io/user-guides/crd-acme/
I changed it to use my Scala service, which listens on HTTPS on port 9463.
I'm trying to deploy my Scala service with Kubernetes and Traefik.
When I port-forward directly to the service:
kubectl port-forward service/core-service 8001:9463
and run curl -k 'https://localhost:8001/health',
I get "{Message:Ok}".
But when I port-forward to Traefik:
kubectl port-forward service/traefik 9463:9463 -n default
and run curl -k 'https://ejemplo.com:9463/tls/health',
I get an "Internal server error".
I guess the problem is that my "core-service" is listening over HTTPS; that's why I add scheme: https.
I tried to find the solution in the documentation but it is confusing.
Those are my yml files:
Services.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: admin
    port: 8080
  - protocol: TCP
    name: websecure
    port: 9463
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: core-service
spec:
  ports:
  - protocol: TCP
    name: websecure
    port: 9463
  selector:
    app: core-service
Deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik
        image: traefik:v2.0
        args:
        - --api.insecure
        - --accesslog
        - --entrypoints.websecure.Address=:9463
        - --providers.kubernetescrd
        - --certificatesresolvers.default.acme.tlschallenge
        - --certificatesresolvers.default.acme.email=foo@you.com
        - --certificatesresolvers.default.acme.storage=acme.json
        # Please note that this is the staging Let's Encrypt server.
        # Once you get things working, you should remove that whole line altogether.
        - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
        ports:
        - name: websecure
          containerPort: 9463
        - name: admin
          containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: core-service
  labels:
    app: core-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core-service
  template:
    metadata:
      labels:
        app: core-service
    spec:
      containers:
      - name: core-service
        image: core-service:0.1.4-SNAPSHOT
        ports:
        - name: websecure
          containerPort: 9463
        livenessProbe:
          httpGet:
            port: 9463
            scheme: HTTPS
            path: /health
          initialDelaySeconds: 10
IngressRoute2.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: core-service
      port: 9463
      scheme: https
  tls:
    certResolver: default
From the docs
A TLS router will terminate the TLS connection by default. However,
the passthrough option can be specified to set whether the requests
should be forwarded "as is", keeping all data encrypted.
In your case, SSL passthrough needs to be enabled because the pod is expecting HTTPS traffic.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: core-service
      port: 9463
      scheme: https
  tls:
    certResolver: default
    passthrough: true

Setting up Let's Encrypt HTTPS Traefik Ingress for a Kubernetes Cluster

I've set up Kubernetes to use the Traefik Ingress to provide name-based routing. I am a little lost in terms of how to configure automatic Let's Encrypt SSL certs. How do I reference the TOML files and configure HTTPS? I am using a simple container below with the NGINX image to test this.
The below is my YAML for the deployment/service/ingress.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: hmweb
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
      - name: hmweb
        image: nginx:latest
        envFrom:
        - configMapRef:
            name: config
        ports:
        - containerPort: 80
I have also included my ingress.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: LoadBalancer
You could build a custom image and include the toml file that way; however, that would NOT be best practice. Here's how I did it:
1) Deploy your toml configuration to kubernetes as a ConfigMap like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cfg-traefik
  labels:
    app: traefik
data:
  traefik.toml: |
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
        [entryPoints.https.tls]
    [acme]
    email = "you@email.com"
    storage = "/storage/acme.json"
    entryPoint = "https"
    acmeLogging = true
    onHostRule = true
    [acme.tlsChallenge]
2) Connect the configuration to your Traefik deployment. Here's my configuration:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dpl-traefik
  labels:
    k8s-app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik
  template:
    metadata:
      labels:
        k8s-app: traefik
      name: traefik
    spec:
      serviceAccountName: svc-traefik
      terminationGracePeriodSeconds: 60
      volumes:
      - name: config
        configMap:
          name: cfg-traefik
      - name: cert-storage
        persistentVolumeClaim:
          claimName: pvc-traefik
      containers:
      - image: traefik:alpine
        name: traefik
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        - mountPath: "/storage"
          name: cert-storage
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
        - --configFile=/config/traefik.toml
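Note that the deployment above mounts a PersistentVolumeClaim named pvc-traefik for the ACME storage (/storage/acme.json). A minimal claim might look like the sketch below; the access mode and the 1Gi size are assumptions, not values from the original setup:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-traefik
  labels:
    app: traefik
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # assumed size; acme.json itself is tiny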