I am trying to connect to the Traefik dashboard on localhost. The manifest below brings up localhost:port, but I only get 404 errors. I'm not sure how to set up the ingress to work locally. The base code was written to run behind an AWS NLB, and I am trying to get it running locally. The manifest below contains the deployment and service for the Traefik Kubernetes install.
apiVersion: v1
kind: Namespace
metadata:
name: traefik
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-ingress-lb
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
# hostNetwork: true
serviceAccountName: traefik-ingress-lb
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v2.5
imagePullPolicy: IfNotPresent
name: traefik-ingress-lb
args:
- --serversTransport.insecureSkipVerify=true
- --providers.kubernetesingress=true
- --providers.kubernetescrd
- --entryPoints.traefik.address=:1080
- --entryPoints.https.address=:443
- --entrypoints.https.http.tls=true
- --entryPoints.https.forwardedHeaders.insecure=true
- --entryPoints.https2.address=:4443
- --entrypoints.https2.http.tls=true
- --entryPoints.https2.forwardedHeaders.insecure=true
- --entryPoints.turn.address=:5349
- --entrypoints.turn.http.tls=true
- --entryPoints.turn.forwardedHeaders.insecure=true
- --api
- --api.insecure
- --accesslog
- --log.level=INFO
- --pilot.dashboard=false
- --entryPoints.http.address=:80
- --entrypoints.http.http.redirections.entryPoint.to=https
- --entrypoints.http.http.redirections.entryPoint.scheme=https
- --entrypoints.http.http.redirections.entrypoint.permanent=true
resources:
limits:
memory: 3072Mi
cpu: 1.5
requests:
memory: 1024Mi
cpu: 1
---
apiVersion: v1
kind: Service
metadata:
name: lb
namespace: kube-system
annotations:
spec:
type: LoadBalancer
externalTrafficPolicy: Local
selector:
k8s-app: traefik-ingress-lb
ports:
- name: http
port: 80
targetPort: 80
- name: https
port: 443
targetPort: 443
- name: https2
port: 4443
targetPort: 4443
- name: turn
port: 5349
targetPort: 5349
---
apiVersion: v1
kind: Service
metadata:
name: dashboard
namespace: traefik
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- name: dashboard
port: 1080
targetPort: 1080
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: traefik
namespace: kube-system
# annotations:
# ingress.kubernetes.io/whitelist-x-forwarded-for: "true"
spec:
entryPoints:
- https
# - web
routes:
- kind: Rule
match: "PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
# middlewares:
# - name: internal-ip-whitelist
# namespace: traefik
services:
- kind: Service
name: dashboard
namespace: kube-system
passHostHeader: true
port: 1080
Your IngressRoute points to the dashboard service in the kube-system namespace, but the dashboard Service is deployed in the traefik namespace.
The route therefore has no backend it can reach, which is why Traefik answers with a 404.
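The simplest fix is to deploy the dashboard Service in kube-system as well: a Service only selects pods in its own namespace, and the Traefik pods run in kube-system, so the Service currently has no endpoints at all. A minimal sketch, keeping the names from your manifest:
apiVersion: v1
kind: Service
metadata:
  name: dashboard
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: dashboard
      port: 1080
      targetPort: 1080
With that in place the IngressRoute's service reference (name: dashboard, namespace: kube-system) resolves. For purely local access you can also bypass the LoadBalancer and port-forward straight to the entrypoint, since --api.insecure serves the dashboard on the traefik entrypoint (port 1080 in your args):
kubectl -n kube-system port-forward deploy/traefik-ingress-lb 1080:1080
and then open http://localhost:1080/dashboard/ (the trailing slash matters).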
Related
I am trying to set up the ingress with Traefik but have had no luck. I don't want to use TLS, just plain port 80. I have a service on port 8080; if I curl that service from inside the cluster it works fine and I get an HTTP 200, but when I connect to the path externally it doesn't work.
The Traefik dashboard works fine on port 8080.
I'm using the following setup:
Traefik: 1.7.7
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: default
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v1.7.7
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --web
- --kubernetes
- --logLevel=DEBUG
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: default
annotations:
metallb.universe.tf/address-pool: mmas-ip-space
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: LoadBalancer
I have a service running on port 8080 and I created an ingress rule for it. This is a test service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: webmust-ing
namespace: default
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- http:
paths:
- path: /helloservice
backend:
serviceName: hellok8s-service
servicePort: 8080
I get a 404 when I curl /helloservice or try to open it in a browser:
curl -v http://10.24.33.32/helloservice
curl -v http://10.24.33.32:8080/helloservice
If I curl the service's cluster IP directly from inside the cluster, I get a 200/OK:
curl -v http://10.100.168.2:8080
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hellok8s-service ClusterIP 10.100.168.2 <none> 8080/TCP 5d7h
cat helloservice.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hellok8s-deployment
labels:
app: hellok8s
spec:
selector:
matchLabels:
app: hellok8s
template:
metadata:
labels:
app: hellok8s
spec:
containers:
- name: hellok8s
image: docker.io/rlkamradt/hellok8s:latest
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: hellok8s-service
spec:
type: ClusterIP
selector:
app: hellok8s
ports:
- port: 8080
targetPort: 8080
Finally, I fixed the problem with the following change to the ingress:
annotations:
ingress.kubernetes.io/protocol: http
traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
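For context, PathPrefixStrip tells Traefik 1.x to strip the matched prefix before forwarding, so a request to /helloservice reaches the backend as /, which is the path the hellok8s container actually answers on. Placed in the ingress from above (same names, just with the extra annotations), it would look roughly like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webmust-ing
  namespace: default
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/protocol: http
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /helloservice
        backend:
          serviceName: hellok8s-service
          servicePort: 8080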
I am new to Kubernetes. I configured an Ingress and want to access a container via the minikube IP plus a path, but it fails to connect. However, I can access it by using a host rule instead of a path, so I think the problem is the Ingress.
I have no idea what to do; I hope someone can help me. Thanks.
Here are my Deployment, Service and Ingress YAML files:
apiVersion: apps/v1
kind: Deployment
metadata:
name: portainer-deployment
spec:
replicas: 2
selector:
matchLabels:
app: portainer
template:
metadata:
labels:
app: portainer
spec:
containers:
- name: portainer
image: portainer/portainer:latest
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rancher-deployment
spec:
replicas: 2
selector:
matchLabels:
app: rancher
template:
metadata:
labels:
app: rancher
spec:
containers:
- name: rancher
image: rancher/server:latest
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: portainer-service
spec:
selector:
app: portainer
type: NodePort
ports:
- port: 80
targetPort: 9000
---
apiVersion: v1
kind: Service
metadata:
name: rancher-service
spec:
selector:
app: rancher
type: NodePort
ports:
- port: 80
targetPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: path-based-ingress
spec:
rules:
- http:
paths:
- path: /portainer
backend:
serviceName: portainer-service
servicePort: 80
- path: /rancher
backend:
serviceName: rancher-service
servicePort: 80
Add this to your ingress in the metadata section:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
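That annotation makes the nginx ingress controller (assuming you are using the minikube ingress addon, which is nginx-based) rewrite the matched path to / before proxying, so /portainer and /rancher hit the backends at their root paths. In context, the ingress would look like this (same services as above, just with the annotation added):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /portainer
        backend:
          serviceName: portainer-service
          servicePort: 80
      - path: /rancher
        backend:
          serviceName: rancher-service
          servicePort: 80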
I have deployed JFrog Container Registry to my Kubernetes cluster. It all comes up fine, but when I try to access it via a browser it redirects to /ui, which returns a 404, and nothing shows up in the logs.
I have not used the Helm chart, as I do not need nginx or Postgres etc. just to try it out.
My deployment is this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jcr
namespace: <REDACTED>
spec:
replicas: 1
template:
metadata:
labels:
app: jcr
spec:
containers:
- name: jcr
image: docker.bintray.io/jfrog/artifactory-jcr:latest
ports:
- containerPort: 8081
volumeMounts:
- name: jcr-data
mountPath: /jcr-data
volumes:
- name: jcr-data
persistentVolumeClaim:
claimName: jcr-data
securityContext:
fsGroup: 2000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jcr-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
name: jcr
namespace: <REDACTED>
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: /
prometheus.io/port: '8081'
spec:
selector:
app: jcr
ports:
- port: 80
targetPort: 8081
sessionAffinity: None
type: ClusterIP
---
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
labels:
app: jcr
name: jcr
namespace: <REDACTED>
spec:
virtualhost:
fqdn: <REDACTED>
tls:
secretName: jcr-live
routes:
- match: /
services:
- name: jcr
port: 80
It looks like your port configuration is missing some changes.
You need to expose port 8082 in the jcr container, which is now the main UI port.
Once that port is exposed, you should add it to your service as well.
So your revised YAML should look something like this (Deployment and Service):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: jcr
namespace: <REDACTED>
spec:
replicas: 1
template:
metadata:
labels:
app: jcr
spec:
containers:
- name: jcr
image: docker.bintray.io/jfrog/artifactory-jcr:latest
ports:
- containerPort: 8081
- containerPort: 8082
volumeMounts:
- name: jcr-data
mountPath: /jcr-data
volumes:
- name: jcr-data
persistentVolumeClaim:
claimName: jcr-data
securityContext:
fsGroup: 2000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jcr-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
name: jcr
namespace: <REDACTED>
annotations:
prometheus.io/scrape: 'true'
prometheus.io/path: /
prometheus.io/port: '8081'
spec:
selector:
app: jcr
ports:
- port: 80
targetPort: 8082
- port: 8081
targetPort: 8081
sessionAffinity: None
type: ClusterIP
Notice I left 8081 open, which allows for direct access to Artifactory if needed for better performance (Artifactory is now running behind a router service).
NOTE - I recommend using the official JFrog Container Registry Helm chart, which greatly simplifies the process of configuring and managing your JCR deployment lifecycle.
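If you do try the Helm route later, the install is roughly as follows (repository URL and chart name taken from JFrog's public chart repository; check their docs for the current values):
helm repo add jfrog https://charts.jfrog.io
helm repo update
helm install jcr jfrog/artifactory-jcr --namespace jcr --create-namespace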
When I set up an Ingress pointing at the Traefik service, I expect a load balancer to be created for that ingress controller on GKE, the same way one would be for a LoadBalancer Service. I could then point at the static IP it creates.
However, when I list my ingresses, there is no static IP assigned.
$ kubectl get ingresses -n kube-system
NAME HOSTS ADDRESS PORTS AGE
traefik-ingress traefik-ui.minikube 80 4m
traefik-ingress.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-ingress
namespace: kube-system
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: traefik-ui.minikube
http:
paths:
- path: "/"
backend:
serviceName: traefik-ingress-service
servicePort: 80
traefik-deployment.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: NodePort
You are creating a Service object for the traefik deployment, but you have used the NodePort type, which only exposes the service on a high port of each node and does not provision a cloud load balancer. If you want Kubernetes to create a load balancer for a Service, you need to specify type LoadBalancer in the Service, so your traefik Service would look like:
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: LoadBalancer
This will talk to the GCP API and create a load balancer with an external IP for you.
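Once GKE has provisioned it, the address appears on the Service rather than on the Ingress (it can show as pending for a minute or two while the load balancer is created):
kubectl get svc traefik-ingress-service -n kube-system -w
Point your DNS (or an /etc/hosts entry for traefik-ui.minikube) at the EXTERNAL-IP column once it is populated.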
I've set up Kubernetes to use the Traefik ingress to provide name-based routing. I am a little lost on how to configure the automatic Let's Encrypt SSL certificates. How do I reference the TOML files and configure HTTPS? I am using a simple container with the NGINX image, shown below, to test this.
Below is my YAML for the deployment/service/ingress:
apiVersion: v1
kind: Service
metadata:
name: web
labels:
app: hmweb
spec:
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
selector:
app: hmweb
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: web-ingress
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: example.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hmweb-deployment
labels:
app: hmweb
spec:
replicas: 1
selector:
matchLabels:
app: hmweb
template:
metadata:
labels:
app: hmweb
spec:
containers:
- name: hmweb
image: nginx:latest
envFrom:
- configMapRef:
name: config
ports:
- containerPort: 80
I have also included my ingress.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: LoadBalancer
You could build a custom image and include the TOML file that way; however, that would NOT be best practice. Here's how I did it:
1) Deploy your TOML configuration to Kubernetes as a ConfigMap like so:
apiVersion: v1
kind: ConfigMap
metadata:
name: cfg-traefik
labels:
app: traefik
data:
traefik.toml: |
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "you@email.com"
storage = "/storage/acme.json"
entryPoint = "https"
acmeLogging = true
onHostRule = true
[acme.tlsChallenge]
2) Connect the configuration to your Traefik deployment. Here's my configuration:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: dpl-traefik
labels:
k8s-app: traefik
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik
template:
metadata:
labels:
k8s-app: traefik
name: traefik
spec:
serviceAccountName: svc-traefik
terminationGracePeriodSeconds: 60
volumes:
- name: config
configMap:
name: cfg-traefik
- name: cert-storage
persistentVolumeClaim:
claimName: pvc-traefik
containers:
- image: traefik:alpine
name: traefik
volumeMounts:
- mountPath: "/config"
name: "config"
- mountPath: "/storage"
name: cert-storage
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
- --configFile=/config/traefik.toml
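Note that the deployment above references a ServiceAccount (svc-traefik) and a PersistentVolumeClaim (pvc-traefik) for the ACME storage that are not shown. The claim is just a small volume to hold acme.json; a minimal sketch, with the size chosen arbitrarily:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-traefik
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi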