I am having trouble exposing a service over HTTP and HTTPS using Traefik 2.9 in Kubernetes.
The HTTP endpoint mostly works (I somehow introduced CORS errors once I tried to add HTTPS, but that is not my main concern). The HTTPS ingress is broken, and I can't find any indication of why it's not working. The Traefik pod doesn't log any errors, and the dotnet service isn't receiving the requests. Both routes show up in the dashboard, and the websecure route is displayed as having TLS enabled.
I'm excluding the ClusterRole, ServiceAccount, and ClusterRoleBinding because I believe they are configured correctly; the HTTP route wouldn't work if they weren't.
Traefik config:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
        - name: traefik
          image: traefik:v2.9
          args:
            - --api.insecure
            - --providers.kubernetesingress
            - --entrypoints.web.address=:80
            - --entrypoints.websecure.address=:443
            - --entrypoints.websecure.http.tls
          ports:
            - name: web
              containerPort: 80
            - name: dashboard
              containerPort: 8080
            - name: websecure
              containerPort: 443
Traefik services:
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: dashboard
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-service
spec:
  type: LoadBalancer
  loadBalancerIP: 10.10.1.38
  ports:
    - targetPort: web
      port: 80
      name: http
    - targetPort: websecure
      port: 443
      name: https
  selector:
    app: traefik
Secret for TLS:
apiVersion: v1
data:
  comptech.pem: <contents of pem file base64 encoded>
  comptech.crt: <contents of crt file base64 encoded>
  comptech.key: <contents of key file base64 encoded>
kind: Secret
metadata:
  name: comptech-cert
  namespace: default
type: Opaque
Service for dotnet application:
apiVersion: v1
kind: Service
metadata:
  name: control-api-service
spec:
  ports:
    - name: http
      port: 80
      targetPort: 5000
      protocol: TCP
    - name: https
      port: 443
      targetPort: 5000
      protocol: TCP
  selector:
    app: control-api
Ingresses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: control-api-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: sub.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: control-api-service
                port:
                  name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: control-api-secure-ingress
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
spec:
  rules:
    - host: sub.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: control-api-service
                port:
                  name: https
  tls:
    - secretName: comptech-cert
My hope here is that someone with much more experience with Traefik/TLS will be able to quickly spot what I'm doing incorrectly. Any input is greatly appreciated!
UPDATE:
The firewall was only allowing HTTP traffic. We reconfigured it to support HTTPS, and Traefik now responds with its default certificate, so I can reach the container, but TLS is still not configured to use my supplied cert.
The pem file is not needed, and the crt file had been generated incorrectly with openssl. The command that worked for me was: openssl crl2pkcs7 -nocrl -certfile comptech.pem | openssl pkcs7 -print_certs -out cert.crt
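One way (a sketch, not from the original post) to repackage the regenerated certificate and key as a standard Kubernetes TLS secret, which produces the tls.crt/tls.key keys that the ConfigMap below expects under /certs:
# Recreate comptech-cert as a kubernetes.io/tls secret; file names follow the openssl command above.
kubectl delete secret comptech-cert --namespace default
kubectl create secret tls comptech-cert \
  --cert=cert.crt \
  --key=comptech.key \
  --namespace default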
Pointing to the https port of the control-api-service was not working and needed to be changed to the http port.
A ConfigMap needed to be created for the Traefik deployment to work correctly:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
  labels:
    name: traefik-config
  namespace: default
data:
  dyn.yaml: |
    # https://doc.traefik.io/traefik/https/tls/
    tls:
      stores:
        default:
          defaultCertificate:
            certFile: '/certs/tls.crt'
            keyFile: '/certs/tls.key'
Finally, the ConfigMap and Secret must be mounted in the Traefik deployment like below:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-deployment
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-account
      containers:
        - name: traefik
          image: traefik:v2.9
          args:
            - --api.insecure
            - --providers.kubernetesingress
            - --entrypoints.web.address=:80
            - --entrypoints.websecure.address=:443
            - --entrypoints.websecure.http.tls
            - --providers.file.filename=/config/dyn.yaml
          ports:
            - name: web
              containerPort: 80
            - name: dashboard
              containerPort: 8080
            - name: websecure
              containerPort: 443
          volumeMounts:
            - name: comptech-cert-volume
              mountPath: /certs
            - name: traefik-config-volume
              mountPath: /config
      volumes:
        - name: comptech-cert-volume
          secret:
            secretName: comptech-cert
        - name: traefik-config-volume
          configMap:
            name: traefik-config
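To double-check which certificate Traefik actually serves after these changes, one quick verification (a sketch; the IP and host come from the manifests above, substitute your own) is:
# Inspect the certificate presented on the websecure entrypoint.
openssl s_client -connect 10.10.1.38:443 -servername sub.domain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer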
In my setup, I use the IngressRoute CRD implementation from Traefik.
The CRDs were automatically installed when I set up the Traefik controller using Helm.
Would it be possible for you to use this in your setup? You can check whether the CRDs already exist using the command below on your k8s cluster.
kubectl get crd
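For example, to narrow the output to the Traefik v2 CRDs (assuming the traefik.containo.us API group used below):
# Expect entries such as ingressroutes.traefik.containo.us and middlewares.traefik.containo.us.
kubectl get crd | grep traefik.containo.us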
Below is a snippet from one of my projects where I also use a custom wildcard certificate from a secret using the IngressRoute manifest.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: blue-api-ingressroute
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`blue.domain.com`) && PathPrefix(`/swagger`)
      kind: Rule
      services:
        - name: blue-api-svc
          port: 80
  tls:
    secretName: bluecert
You can also include other custom resources that are available from Traefik. The complete set of available configuration options can be seen in the Traefik Kubernetes CRD documentation. For example, below is the same snippet with Middleware and TLSOption resources included to improve the security of the endpoint.
apiVersion: traefik.containo.us/v1alpha1
kind: TLSOption
metadata:
  name: tlsoptions
spec:
  minVersion: VersionTLS12
  cipherSuites:
    - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
    - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
    - TLS_AES_256_GCM_SHA384
    - TLS_AES_128_GCM_SHA256
    - TLS_CHACHA20_POLY1305_SHA256
    - TLS_FALLBACK_SCSV
  curvePreferences:
    - CurveP521
    - CurveP384
  sniStrict: true
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: security
spec:
  headers:
    frameDeny: true
    sslRedirect: true
    browserXssFilter: true
    contentTypeNosniff: true
    stsIncludeSubdomains: true
    stsPreload: true
    stsSeconds: 31536000
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: blue-api-ingressroute
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`blue.domain.com`) && PathPrefix(`/swagger`)
      kind: Rule
      services:
        - name: blue-api-svc
          port: 80
      middlewares:
        - name: security
  tls:
    secretName: bluecert
    options:
      name: tlsoptions
Related
I am using GKE with ingress-nginx (https://kubernetes.github.io/ingress-nginx/). I tried many cert-manager tutorials but was unable to figure it out.
Could you give me an example YAML file if you have gotten SSL working with ingress-nginx on Google Kubernetes Engine?
You can use this as a starting point and expand on it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arecord-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arecord
  template:
    metadata:
      labels:
        app: arecord
    spec:
      containers:
        - name: arecord
          image: gcr.io/clear-shell-346807/arecord
          ports:
            - containerPort: 8080
          env:
            - name: PORT
              value: "8080"
---
apiVersion: v1
kind: Service
metadata:
  name: arecord-srv
spec:
  selector:
    app: arecord
  ports:
    - name: arecord
      protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: ssl-ip
spec:
  tls:
    - hosts:
        - vareniyam.me
      secretName: echo-tls
  rules:
    - host: vareniyam.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: arecord-srv
                port:
                  number: 8080
You have said you're using nginx ingress, but your ingress class is saying gce:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
You have not indicated which ClusterIssuer or Issuer you want to use. cert-manager issues certificates only after you tell it to create one, typically by referencing an Issuer or ClusterIssuer.
I am unsure what tutorials you have tried, but have you tried looking at the cert-manager docs here: https://cert-manager.io/docs/
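For illustration only (the issuer name and email are placeholders; the host and service names reuse the example above), a minimal cert-manager setup with ingress-nginx typically pairs a ClusterIssuer with an annotation on the Ingress:
# Hypothetical Let's Encrypt ClusterIssuer using the HTTP-01 solver with the nginx ingress class.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com   # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx
---
# The Ingress then requests a certificate via the cert-manager annotation;
# cert-manager stores the issued cert in the secret named under spec.tls.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - vareniyam.me
      secretName: echo-tls
  rules:
    - host: vareniyam.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: arecord-srv
                port:
                  number: 8080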
I have a Kubernetes cluster with a deployment of RabbitMQ. I want to expose the RabbitMQ management UI so that I can access it in my browser. To do that I have a deployment, a service, and an ingress file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
        - image: rabbitmq:3.8.9-management
          name: rabbitmq
          ports:
            - containerPort: 5672
            - containerPort: 15672
          resources: {}
      restartPolicy: Always
The service:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  ports:
    - name: "5672"
      port: 5672
      targetPort: 5672
    - name: "15672"
      port: 15672
      targetPort: 15672
  selector:
    app: rabbitmq
Ingress file
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - http:
        paths:
          - path: /rabbitmq
            pathType: Prefix
            backend:
              service:
                name: rabbitmq
                port:
                  number: 15672
When I type http://localhost/rabbitmq in my browser I get this error: {"error":"Object Not Found","reason":"Not Found"}
But when I exec into some other pod and run curl http://rabbitmq:15672, I get a response from the website.
I'm new to Kubernetes and haven't found any relevant solution to my problem. If someone could help me, I would be very grateful!
Thanks for reading.
Try:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx   # <-- assumed you only have 1 ingress-nginx
  rules:
    - http:
        paths:
          - path: /rabbitmq(/|$)(.*)
            ...
A request to http://localhost/rabbitmq will be seen by your rabbitmq service as /.
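For reference, a hypothetical fully expanded version of that Ingress, filling in the backend details from the service in your question (treat it as a sketch, not a verified config):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # Strip the /rabbitmq prefix and forward only the captured remainder.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /rabbitmq(/|$)(.*)
            pathType: ImplementationSpecific   # regex paths are ImplementationSpecific in the ingress-nginx rewrite examples
            backend:
              service:
                name: rabbitmq
                port:
                  number: 15672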
I am trying to run 3 containers in my GKE cluster. I have them exposed via a network load balancer, and on top of that I am using an Ingress so I can reach my services from different domains with SSL certs on them.
Here is the complete manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: us-east4-docker.pkg.dev/web:e856485 # docker image
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cms
  template:
    metadata:
      labels:
        app: cms
    spec:
      containers:
        - name: cms
          image: us-east4-docker.pkg.dev/cms:4e1fe2f # docker image
          ports:
            - containerPort: 8055
          env:
            - name: DB
              value: "postgres"
            - name: DB_HOST
              value: 10.142.0.3
            - name: DB_PORT
              value: "5432"
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: us-east4-docker.pkg.dev/api:4e1fe2f # docker image
          ports:
            - containerPort: 8080
          env:
            - name: HOST
              value: "0.0.0.0"
            - name: PORT
              value: "8080"
            - name: NODE_ENV
              value: production
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: web
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: web
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: cms-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: cms
spec:
  ports:
    - port: 8055
      protocol: TCP
      targetPort: 8055
  selector:
    app: cms
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: api
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: api
  type: NodePort
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: web-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: cms-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: api-cert
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - secretName: api-cert
    - secretName: cms-cert
    - secretName: web-cert
  rules:
    - host: web-gke.dev
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: web-lb
                port:
                  number: 3000
    - host: cms-gke.dev
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: cms-lb
                port:
                  number: 8055
    - host: api-gke.dev
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: api-lb
                port:
                  number: 8080
The containers are accessible through the network load balancer, but from the Ingress (L7 LB) the health check is failing.
I tried editing the health checks manually from HTTP:80 to TCP:8080/8055/3000 for the 3 services and it works.
But eventually the Ingress reverts them back to HTTP health checks and they fail again. I also tried using NodePort instead of LoadBalancer as the service type, but no luck.
Any help?
The first thing I would like to mention is that you need to recheck your implementation: from what I see, you are creating an Ingress, which will create a load balancer, and this Ingress is using three services of type LoadBalancer, each of which will also create its own load balancer (I'm assuming the default behaviour, unless you applied the famous workaround of deleting the service's load balancer manually after it is created).
I don't think this is correct unless you need that design for some reason. So my suggestion is that you might want to change your service types to NodePort.
As for answering your question, what you are missing is:
You need to implement a BackendConfig with custom HealthCheck configurations.
1- Create the BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTH_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    type: PROTOCOL
    requestPath: PATH
    port: PORT
2- Use this config in your service(s):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {
      "PORT_NAME_1":"api-lb-backendconfig"
    }}'
spec:
  ports:
    - name: PORT_NAME_1
      port: PORT_NUMBER_1
      protocol: TCP
      targetPort: TARGET_PORT
Once you apply such configurations, your Ingress's load balancer will be created with the BackendConfig "api-lb-backendconfig".
Consider this documentation page as your reference.
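As an illustration only (the /healthz path, port, and timing values below are assumptions, not taken from the question), a filled-in pair for the api-lb service could look like this:
# Hypothetical filled-in BackendConfig for the api backend.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /healthz   # assumed health endpoint of the api container
    port: 8080
---
# The Service references the BackendConfig by port name.
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"http":"api-lb-backendconfig"}}'
spec:
  type: NodePort
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: api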
I'm a newbie with Kubernetes and Traefik.
I followed this tutorial:
https://docs.traefik.io/user-guides/crd-acme/
and changed it to use my Scala service, which listens over HTTPS on port 9463.
I'm trying to deploy my Scala service with Kubernetes and Traefik.
When I forward directly to the service:
kubectl port-forward service/core-service 8001:9463
and perform a curl -k 'https://localhost:8001/health':
I get "{Message:Ok}".
But when I port-forward to Traefik:
kubectl port-forward service/traefik 9463:9463 -n default
and perform a curl -k 'https://ejemplo.com:9463/tls/health':
I get an "Internal server error".
I guess the problem is that my "core-service" is listening over the HTTPS protocol; that's why I added scheme: https.
I tried to find the solution in the documentation, but it is confusing.
These are my yml files:
Services.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 9463
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: core-service
spec:
  ports:
    - protocol: TCP
      name: websecure
      port: 9463
  selector:
    app: core-service
Deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.0
          args:
            - --api.insecure
            - --accesslog
            - --entrypoints.websecure.Address=:9463
            - --providers.kubernetescrd
            - --certificatesresolvers.default.acme.tlschallenge
            - --certificatesresolvers.default.acme.email=foo@you.com
            - --certificatesresolvers.default.acme.storage=acme.json
            # Please note that this is the staging Let's Encrypt server.
            # Once you get things working, you should remove that whole line altogether.
            - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
          ports:
            - name: websecure
              containerPort: 9463
            - name: admin
              containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: core-service
  labels:
    app: core-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core-service
  template:
    metadata:
      labels:
        app: core-service
    spec:
      containers:
        - name: core-service
          image: core-service:0.1.4-SNAPSHOT
          ports:
            - name: websecure
              containerPort: 9463
          livenessProbe:
            httpGet:
              port: 9463
              scheme: HTTPS
              path: /health
            initialDelaySeconds: 10
IngressRoute2.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
      kind: Rule
      services:
        - name: core-service
          port: 9463
          scheme: https
  tls:
    certResolver: default
From the docs:
A TLS router will terminate the TLS connection by default. However, the passthrough option can be specified to set whether the requests should be forwarded "as is", keeping all data encrypted.
In your case, SSL passthrough needs to be enabled because the pod is expecting HTTPS traffic.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
      kind: Rule
      services:
        - name: core-service
          port: 9463
          scheme: https
  tls:
    certResolver: default
    passthrough: true
When I test my Spring Boot app without Docker, I test it with:
https://localhost:8081/points/12345/search
and it works great. I get an error if I use http.
Now I want to deploy it locally with Kubernetes, at the URL https://sge-api.local.
When I use http, I get the same error as when I don't use Docker.
But when I use https, I get:
<html><body><h1>404 Not Found</h1></body></html>
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sge-api-local
  name: sge-api-local
  namespace: sge
spec:
  selector:
    matchLabels:
      app: sge-api-local
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: sge-api-local
    spec:
      containers:
        - image: sge_api:local
          name: sge-api-local
Here is my ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: sge-ingress
  namespace: sge
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: sge-api.local
      http:
        paths:
          - backend:
              serviceName: sge-api-local
              servicePort: 8081
  tls:
    - secretName: sge-api-tls-cert
with:
kubectl -n kube-system create secret tls sge-api-tls-cert --key=../certs/privkey.pem --cert=../certs/cert1.pem
Finally, here is my service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sge-api-local
  name: sge-api-local
  namespace: sge
spec:
  ports:
    - name: "8081"
      port: 8081
  selector:
    app: sge-api-local
What should I do ?
EDIT:
traefik-config.yml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
data:
  traefik.toml: |
    # traefik.toml
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
      [entryPoints.https.tls]
traefik-deployment:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik:1.7
          name: traefik-ingress-lb
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443
            - name: admin
              containerPort: 8080
              hostPort: 8080
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
traefik-service.yml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
Please make sure that you have enabled TLS. Let's Encrypt is a free TLS Certificate Authority (CA), and you can use it to automatically request and renew Let's Encrypt certificates for public domain names. Make sure that you have created the ConfigMap. Check that you followed every step during the Traefik setup: traefik-ingress-controller.
Then you have to specify to which hosts the created secret should be assigned, e.g.:
tls:
  - secretName: sge-api-tls-cert
    hosts:
      - sge-api.local
Remember to add the specific port assigned to the host when opening the link. In your case it should be: https://sge-api.local:8081
When using SSL offloading outside of the cluster, it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available.
You could also add these annotations to the Ingress configuration file:
traefik.ingress.kubernetes.io/frontend-entry-points: http, https
traefik.ingress.kubernetes.io/redirect-entry-point: https
to enable a redirect to another entryPoint for that frontend (e.g. HTTPS), as sketched below.
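For clarity, a sketch (not from the original answer) of where those Traefik 1.7 annotations would sit on the sge-ingress shown earlier:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: sge-ingress
  namespace: sge
  annotations:
    kubernetes.io/ingress.class: traefik
    # Expose the frontend on both entry points and redirect plain HTTP to HTTPS.
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
    - host: sge-api.local
      http:
        paths:
          - backend:
              serviceName: sge-api-local
              servicePort: 8081
  tls:
    - secretName: sge-api-tls-cert
      hosts:
        - sge-api.local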
Let me know if it helps.