Traefik-ingress dashboard returns 404 - kubernetes

I deployed the Traefik ingress controller pod and then two services: one of type LoadBalancer for the reverse proxy and one of type ClusterIP for the dashboard.
I also created an Ingress to redirect everything at <elb-address>/dashboard to my Traefik dashboard.
But for some reason I get a 404 error code when I try to request my dashboard at aws-ip/dashboard.
These are the manifest YAMLs that I use to set up Traefik:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    targetPort: 80
    port: 80
  type: LoadBalancer
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: kube-system
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - path: /dashboard
        backend:
          serviceName: traefik-web-ui
          servicePort: web
Update
I am watching the log and get the following errors with RBAC activated and the ClusterRole, RoleBinding and ServiceAccount created:
E1124 18:56:23.267560 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list endpoints in the namespace "default"
E1124 18:56:23.648207 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list services in the namespace "default"
E1124 18:56:23.267560 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:traefik-ingress" cannot list endpoints in the namespace "default"
These are my ServiceAccount, ClusterRole and RoleBinding:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: traefik-ingress
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress
subjects:
- kind: ServiceAccount
  name: traefik-ingress
  namespace: default
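(Side note: the RoleBinding above is namespaced and its subject points at the default namespace, while the errors show the controller running as system:serviceaccount:kube-system:traefik-ingress. A cluster-wide binding that matches the pod's actual namespace would look roughly like this; it is only a sketch derived from the manifests above, and the solution below took the Helm route instead.)
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress
subjects:
- kind: ServiceAccount
  name: traefik-ingress
  namespace: kube-system  # must match the namespace the Traefik pod runs in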

Solution
I applied this:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
and then installed the stable/traefik chart with Helm:
helm install stable/traefik --name=traefik-ingress-controller --values values.yaml
The values.yaml file is:
dashboard:
  enabled: true
  domain: traefik-ui.k8s.io
rbac:
  enabled: true
kubernetes:
  namespaces:
  - default
  - kube-system
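To check that the chart-based install actually serves the dashboard, something like this can be used (a sketch; the exact service names depend on the Helm release, and <elb-address> is the ELB created for Traefik):
# list what the chart created (the release namespace may vary)
kubectl get svc,ingress --all-namespaces | grep traefik
# request the dashboard through the ELB, using the domain from values.yaml
curl -H 'Host: traefik-ui.k8s.io' http://<elb-address>/dashboard/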
Thanks for the help.

I tried this myself. So basically when you create your Ingress it gets created with a host of traefik-ui.minikube (default), so you won't be able to access the dashboard with <elb-address>/dashboard/.
You will have to access it with traefik-ui.minikube/dashboard/. As an example:
$ kubectl -n kube-system get ingress
NAME              HOSTS                 ADDRESS                  PORTS   AGE
traefik-ingress   *                                              80      8m13s
traefik-web-ui    traefik-ui.minikube   xxxx.elb.amazonaws.com   80      71d
$ curl -H 'Host: traefik-ui.minikube' xxxx.elb.amazonaws.com/dashboard/
<!doctype html><html class="has-navbar-fixed-top">
...
</html>
You can also add an entry to your /etc/hosts file if you'd like to see it on your browser.
<one-of-the-ips-of-your-elb> traefik-ui.minikube
And you can also add a host to the rules in your Ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: kube-system
  name: traefik-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: yourown.hostname.com
    http:
      paths:
      - path: /dashboard
        backend:
          serviceName: traefik-web-ui
          servicePort: web
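With a host set like this, the request has to carry that host header (or resolve through DNS), following the same pattern as the curl example above; yourown.hostname.com is just the placeholder from the manifest:
curl -H 'Host: yourown.hostname.com' xxxx.elb.amazonaws.com/dashboard/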

Just because I ran into this, the docs say:
The trailing slash / in /dashboard/ is mandatory
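In practice the two requests below behave differently (a small illustration based on the curl example above):
curl -H 'Host: traefik-ui.minikube' xxxx.elb.amazonaws.com/dashboard    # no trailing slash: typically a 404
curl -H 'Host: traefik-ui.minikube' xxxx.elb.amazonaws.com/dashboard/   # trailing slash: dashboard HTML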

Related

Kubernetes metrics server API

I am running a Kubernetes cluster in a dev environment. I executed the deployment files for the metrics server; my pod is up and running without any error message. See the output here:
root@master:~/pre-release# kubectl get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
metrics-server-568697d856-9jshp   1/1     Running   0          10m   10.244.1.5   worker-1   <none>           <none>
Next, when I check the API service status, it shows up as below:
Name:         v1beta1.metrics.k8s.io
Namespace:
Labels:       k8s-app=metrics-server
Annotations:  <none>
API Version:  apiregistration.k8s.io/v1
Kind:         APIService
Metadata:
  Creation Timestamp:  2021-03-29T17:32:16Z
  Resource Version:    39213
  UID:                 201f685d-9ef5-4f0a-9749-8004d4d529f4
Spec:
  Group:                     metrics.k8s.io
  Group Priority Minimum:    100
  Insecure Skip TLS Verify:  true
  Service:
    Name:       metrics-server
    Namespace:  pre-release
    Port:       443
  Version:           v1beta1
  Version Priority:  100
Status:
  Conditions:
    Last Transition Time:  2021-03-29T17:32:16Z
    Message:               failing or missing response from https://10.105.171.253:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.171.253:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    Reason:                FailedDiscoveryCheck
    Status:                False
    Type:                  Available
Events:  <none>
Here is the metrics-server deployment code:
containers:
- args:
  - --cert-dir=/tmp
  - --secure-port=443
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
  - --kubelet-use-node-status-port
  image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
Here is the complete code:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: pre-release
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: pre-release
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: pre-release
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: pre-release
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=443
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        - --kubelet-use-node-status-port
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: pre-release
  version: v1beta1
  versionPriority: 100
Latest error:
I0330 09:02:31.705767 1 secure_serving.go:116] Serving securely on [::]:4443
E0330 09:04:01.718135 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:worker-2: unable to fetch metrics from Kubelet worker-2 (worker-2): Get https://worker-2:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup worker-2 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:master: unable to fetch metrics from Kubelet master (master): Get https://master:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup master on 10.96.0.10:53: read udp 10.244.2.23:41419->10.96.0.10:53: i/o timeout, unable to fully scrape metrics from source kubelet_summary:worker-1: unable to fetch metrics from Kubelet worker-1 (worker-1): Get https://worker-1:10250/stats/summary?only_cpu_and_memory=true: dial tcp: i/o timeout]
Could someone please help me to fix the issue.
The following container arguments work for me in our development cluster:
containers:
- args:
  - /metrics-server
  - --cert-dir=/tmp
  - --secure-port=443
  - --kubelet-preferred-address-types=InternalIP
  - --kubelet-insecure-tls
Result of kubectl describe apiservice v1beta1.metrics.k8s.io:
Status:
  Conditions:
    Last Transition Time:  2021-03-29T19:19:20Z
    Message:               all checks passed
    Reason:                Passed
    Status:                True
    Type:                  Available
Give it a try.
As I mentioned in the comment section, this may be fixed by adding hostNetwork: true to the metrics-server Deployment.
According to the Kubernetes documentation:
HostNetwork - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.
spec:
  hostNetwork: true  # <---
  containers:
  - args:
    - /metrics-server
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-use-node-status-port
    - --kubelet-insecure-tls
There is an example with information on how you can edit your deployment to include hostNetwork: true in your metrics-server deployment.
Also see the related GitHub issue.
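Once the metrics pipeline is healthy, a quick sanity check (not part of the original answer) is:
kubectl describe apiservice v1beta1.metrics.k8s.io   # the Available condition should report "all checks passed"
kubectl top nodes                                    # should print CPU/memory usage for every node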

Kubernetes API to create a CRD using Minikube, with deployment pod in pending state

I have a problem with the Kubernetes API and a CRD while creating a deployment with a single nginx pod; I would like to access it on port 80 from a remote server, and locally as well. The pod sits in a pending state, and when running kubectl get pods, after around 40 seconds on average the pod disappears and a pod with a different nginx name starts up; this seems to be in a loop.
The error is
* W1214 23:27:19.542477 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
I was following this article about service accounts and roles,
https://thorsten-hans.com/custom-resource-definitions-with-rbac-for-serviceaccounts#create-the-clusterrolebinding
I am not even sure I have created this correctly.
Do I even need to create the ServiceAccount_v1.yaml, PolicyRule_v1.yaml and ClusterRoleBinding.yaml files to resolve my error above?
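For reference, the command suggested by that warning, filled in with the ServiceAccount defined below (the binding name here is arbitrary and only illustrative), would look something like:
kubectl create rolebinding extension-apiserver-auth-reader \
  -n kube-system \
  --role=extension-apiserver-authentication-reader \
  --serviceaccount=example:user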
All of my .yaml files for this are below,
CustomResourceDefinition_v1.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: webservers.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  names:
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: WebServer
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: webservers
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - ws
    # singular name to be used as an alias on the CLI and for display
    singular: webserver
  # either Namespaced or Cluster
  scope: Cluster
  # list of versions supported by this CustomResourceDefinition
  versions:
  - name: v1
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
    # Each version can be enabled/disabled by Served flag.
    served: true
    # One and only one version must be marked as the storage version.
    storage: true
Deployments_v1_apps.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: nginx-deployment
spec:
  # 1 Pod should exist at all times.
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 100
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: nginx
    spec:
      containers:
      # Run this image
      - image: nginx:1.14
        name: nginx
        ports:
        - containerPort: 80
      hostname: nginx
      nodeName: webserver01
      securityContext:
        runAsNonRoot: True
#status:
#  availableReplicas: 1
Ingress_v1_networking.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Exact
        backend:
          resource:
            kind: nginx-service
            name: nginx-deployment
          #service:
          #  name: nginx
          #  port: 80
          #serviceName: nginx
          #servicePort: 80
Service_v1_core.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
ServiceAccount_v1.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: user
  namespace: example
PolicyRule_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "example.com:webservers:reader"
rules:
- apiGroups: ["example.com"]
  resources: ["ResourceAll"]
  verbs: ["VerbAll"]
ClusterRoleBinding_v1.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: "example.com:webservers:cdreader-read"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: "example.com:webservers:reader"
subjects:
- kind: ServiceAccount
  name: user
  namespace: example

Multiple ingress controllers are not working

I'm creating multiple ingress controllers in different namespaces. Initially, each one creates a load balancer in AWS and attaches pod IP addresses to the target groups. After some days it stops updating new pod IPs in the target group. I've attached the ingress controller logs here:
E0712 15:02:30.516295 1 leaderelection.go:270] error retrieving resource lock namespace1/ingress-controller-leader-alb: configmaps "ingress-controller-leader-alb" is forbidden: User "system:serviceaccount:namespace1:fc-serviceaccount-icalb" cannot get resource "configmaps" in API group "" in the namespace "namespace1"
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "fc-ingress"
  annotations:
    kubernetes.io/ingress.class: alb-namespace1
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets:
    alb.ingress.kubernetes.io/certificate-arn:
    alb.ingress.kubernetes.io/ssl-policy:
    alb.ingress.kubernetes.io/security-groups:
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '2'
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '5'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=false
    alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=false
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=0
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=300
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=false
  labels:
    app: fc-label-app-ingress
spec:
  rules:
  - host: "hostname1.com"
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: "hostname2.com"
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
  - host: "hostname3.com"
    http:
      paths:
      - backend:
          serviceName: service3
          servicePort: 80
ingress_controller.yaml
# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-ingress-controller-alb
  namespace: namespace1
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resource for. For limiting ingress
  # namespace scope, see --watch-namespace.
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fc-label-app-icalb
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: fc-label-app-icalb
    spec:
      containers:
      - args:
        # Limit the namespace where this ALB Ingress Controller deployment will
        # resolve ingress resources. If left commented, all namespaces are used.
        - --watch-namespace=namespace1
        # Setting the ingress-class flag below ensures that only ingress resources with the
        # annotation kubernetes.io/ingress.class: "alb" are respected by the controller. You may
        # choose any class you'd like for this controller to respect.
        - --ingress-class=alb-namespace1
        # Name of your cluster. Used when naming resources created
        # by the ALB Ingress Controller, providing distinction between
        # clusters.
        - --cluster-name=$EKS_CLUSTER_NAME
        # AWS VPC ID this ingress controller will use to create AWS resources.
        # If unspecified, it will be discovered from ec2metadata.
        # - --aws-vpc-id=vpc-xxxxxx
        # AWS region this ingress controller will operate in.
        # If unspecified, it will be discovered from ec2metadata.
        # List of regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#vpc_region
        # - --aws-region=us-west-1
        # Enables logging on all outbound requests sent to the AWS API.
        # If logging is desired, set to true.
        # - ---aws-api-debug
        # Maximum number of times to retry the aws calls.
        # defaults to 10.
        # - --aws-max-retries=10
        env:
        # AWS key id for authenticating with the AWS API.
        # This is only here for examples. It's recommended you instead use
        # a project like kube2iam for granting access.
        #- name: AWS_ACCESS_KEY_ID
        #  value: KEYVALUE
        # AWS key secret for authenticating with the AWS API.
        # This is only here for examples. It's recommended you instead use
        # a project like kube2iam for granting access.
        #- name: AWS_SECRET_ACCESS_KEY
        #  value: SECRETVALUE
        # Repository location of the ALB Ingress Controller.
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
        imagePullPolicy: Always
        name: server
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      serviceAccountName: fc-serviceaccount-icalb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-clusterrole-icalb
rules:
- apiGroups:
  - ""
  - extensions
  resources:
  - configmaps
  - endpoints
  - events
  - ingresses
  - ingresses/status
  - services
  verbs:
  - create
  - get
  - list
  - update
  - watch
  - patch
- apiGroups:
  - ""
  - extensions
  resources:
  - nodes
  - pods
  - secrets
  - services
  - namespaces
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-clusterrolebinding-icalb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fc-clusterrole-icalb
subjects:
- kind: ServiceAccount
  name: fc-serviceaccount-icalb
  namespace: namespace1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-serviceaccount-icalb
  namespace: namespace1
I have had an issue like that on AKS. I have two Nginx Ingress Controllers:
external-nginx-ingress
internal-nginx-ingress
Only one worked at a time, Internal or external.
After specifying a unique election-id for each one the problem was fixed.
I use the following Helm chart:
Repository = "https://kubernetes.github.io/ingress-nginx"
Chart = "ingress-nginx"
Chart_version = "4.1.3"
K8s Version = "1.22.4"
Deployment
kubectl get deploy -n ingress
NAME                                READY   UP-TO-DATE   AVAILABLE
external-nginx-ingress-controller   3/3     3            3
internal-nginx-ingress-controller   1/1     1            1
IngressClass
kubectl get ingressclass
NAME             CONTROLLER                      PARAMETERS
external-nginx   k8s.io/ingress-nginx            <none>
internal-nginx   k8s.io/internal-ingress-nginx   <none>
Deployment for External
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-nginx-ingress-controller
  namespace: ingress
  annotations:
    meta.helm.sh/release-name: external-nginx-ingress
    meta.helm.sh/release-namespace: ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: external-nginx-ingress
      app.kubernetes.io/name: ingress-nginx
  template:
    spec:
      containers:
      - name: ingress-nginx-external-controller
        image: >-
          k8s.gcr.io/ingress-nginx/controller:v1.2.1
        args:
        - /nginx-ingress-controller
        - >-
          --publish-service=$(POD_NAMESPACE)/external-nginx-ingress-controller
        - '--election-id=external-ingress-controller-leader'
        - '--controller-class=k8s.io/ingress-nginx'
        - '--ingress-class=external-nginx'
        - '--ingress-class-by-name=true'
Deployment for Internal
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-nginx-ingress-controller
  namespace: ingress
  annotations:
    meta.helm.sh/release-name: internal-nginx-ingress
    meta.helm.sh/release-namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: internal-nginx-ingress
      app.kubernetes.io/name: ingress-nginx
  template:
    spec:
      containers:
      - name: ingress-nginx-internal-controller
        image: >-
          k8s.gcr.io/ingress-nginx/controller:v1.2.1
        args:
        - /nginx-ingress-controller
        - >-
          --publish-service=$(POD_NAMESPACE)/internal-nginx-ingress-controller
        - '--election-id=internal-ingress-controller-leader'
        - '--controller-class=k8s.io/internal-ingress-nginx'
        - '--ingress-class=internal-nginx'
        - '--ingress-class-by-name=true'
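If the controllers are installed through the ingress-nginx Helm chart listed above, the same separation can presumably be expressed via chart values instead of patching the Deployments; the value names below are taken from the chart's documented options and should be double-checked against chart version 4.1.3:
# values.yaml for the internal release (sketch)
controller:
  electionID: internal-ingress-controller-leader
  ingressClassByName: true
  ingressClassResource:
    name: internal-nginx
    controllerValue: k8s.io/internal-ingress-nginx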

traefik setup on gke not working

I’m trying to get traefik running in GKE, following the user guide (https://docs.traefik.io/user-guide/kubernetes/).
Instead of seeing the dashboard, I get a 404. I guess there’s a problem with the RBAC setup somewhere but I can’t figure it out.
Any help would be greatly appreciated.
The ingress controller log shows a constant flow of (one each second):
E0714 12:19:56.665790 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:traefik-ingress-controller" cannot list services at the cluster scope: Unknown user "system:serviceaccount:kube-system:traefik-ingress-controller"
and the traefik pod itself constantly spews:
E0714 12:17:45.108356 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User "system:serviceaccount:default:default" cannot list ingresses.extensions in the namespace "kube-system": Unknown user "system:serviceaccount:default:default"
E0714 12:17:45.708160 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:default:default" cannot list services in the namespace "default": Unknown user "system:serviceaccount:default:default"
E0714 12:17:45.714057 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:default:default" cannot list endpoints in the namespace "kube-system": Unknown user "system:serviceaccount:default:default"
E0714 12:17:45.714829 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden: User "system:serviceaccount:default:default" cannot list ingresses.extensions in the namespace "default": Unknown user "system:serviceaccount:default:default"
E0714 12:17:45.715653 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:default:default" cannot list endpoints in the namespace "default": Unknown user "system:serviceaccount:default:default"
E0714 12:17:45.716659 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: services is forbidden: User "system:serviceaccount:default:default" cannot list services in the namespace "kube-system": Unknown user "system:serviceaccount:default:default"
I created the clusterrole using:
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups: [""]
  resources: ["servies", "endpoints", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
and then deployed Traefik as a Deployment:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
  type: LoadBalancer
When using Helm to install Traefik, I used the following values file:
dashboard:
  enabled: true
  domain: traefik.example.com
kubernetes:
  namespaces:
  - default
  - kube-system
And finally, for the UI I used the following YAML:
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
thanks for looking!
(edit: corrected typo in title)
Since the namespace "kube-system" is handled by the Master node, you will not be able to deploy anything on that specific namespace. The Master node within GKE is a managed service and is not accessible to users at this time.
If you would like to have this functionality, then the only suggestion I can provide at this time is to create your own custom cluster from scratch. This will allow you to have access to the Master Node and you would have the option to customize your cluster to your liking.
Edit: I was able to find instructions on GitHub on how to use Traefik as a GKE load balancer. I would suggest testing this first before running it in your production cluster.
I think your problem is that you're setting up a ClusterRoleBinding with name "traefik-ingress-controller" and namespace "kube-system" but Traefik is running in namespace default with serviceaccount default.
Try changing your ClusterRoleBinding to:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
Or deploy your system with the serviceaccount "traefik-ingress-controller" in the namespace "kube-system".
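Either way, the effective permissions can be checked before redeploying by letting kubectl impersonate the service account (a quick check, not part of the original answer):
kubectl auth can-i list services --as=system:serviceaccount:default:default
kubectl auth can-i list ingresses.extensions --as=system:serviceaccount:kube-system:traefik-ingress-controller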

Couldn't access my Kubernetes service via a traefik reverse proxy

I deployed a Kubernetes cluster (1.8.8) on a cloud OpenStack platform (1 master with a public IP address / 3 nodes). I want to use Traefik (latest version, 1.6.1) as a reverse proxy for accessing my services.
Traefik was deployed successfully as a DaemonSet and I can access its GUI on port 8081. My Prometheus ingress appears correctly in the Traefik interface, but I can't access my Prometheus server UI.
Could you tell me what I am doing wrong? Did I miss something?
Thanks
Ingress of my prometheus:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
  - http:
      paths:
      - path: /prometheus
        backend:
          serviceName: prometheus-svc
          servicePort: prom
My daemonset is below:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: traefik
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      hostNetwork: true # workaround
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: traefik:v1.6.1
        name: traefik-ingress-lb
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        resources:
          requests:
            cpu: 100m
            memory: 20Mi
        args:
        - --kubernetes
        - --configfile=/config/traefik.toml
      volumes:
      - name: config
        configMap:
          name: traefik-conf
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: traefik
data:
  traefik.toml: |-
    defaultEntryPoints = ["http"]
    [entryPoints]
      [entryPoints.http]
      address = ":80"
    [web]
    address = ":8081"