How to fix "Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden" - kubernetes

I have HAProxy Ingress in Kubernetes AKS. After upgrading the Kubernetes version, I get errors from HAProxy. I tried to solve the problem by modifying my old haproxy.yaml to avoid deprecated APIs and to use the latest HAProxy Ingress image, but the errors persist. How can I fix them?
I also tried this answer, but it doesn't work for me.
I checked this issue on GitHub, but although I use v0.12-snapshot.3 the error persists.
This is my modified haproxy.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
  - create
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: default
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
      - name: ingress-default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
  - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: ingress-controller
      containers:
      - name: haproxy-ingress
        image: quay.io/jcmoraisjr/haproxy-ingress:v0.12.1
        imagePullPolicy: Always
        resources:
          requests:
            memory: "64Mi"
            cpu: "75m"
          limits:
            memory: "256Mi"
            cpu: "500m"
        args:
        - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --reload-strategy=reusesocket
        ports:
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10253
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: https
    port: 443
  - name: stat
    port: 1936
  selector:
    run: haproxy-ingress
The following is the output of kubectl logs:
I0307 20:52:16.873675 6 launch.go:215]
Name: HAProxy
Release: v0.12-snapshot.3
Build: git-b34edd0
Repository: https://github.com/jcmoraisjr/haproxy-ingress
I0307 20:52:16.873776 6 launch.go:218] watching for ingress resources with 'kubernetes.io/ingress.class' annotation: haproxy
I0307 20:52:16.873787 6 launch.go:225] watching for ingress resources with IngressClass' controller name: haproxy-ingress.github.io/controller
I0307 20:52:16.873802 6 launch.go:230] ignoring ingress resources without any class reference - --watch-ingress-without-class is false
I0307 20:52:16.873968 6 launch.go:492] Creating API client for https://10.0.0.1:443
I0307 20:52:16.902520 6 launch.go:504] Running in Kubernetes Cluster version v1.17 (v1.17.16) - git (clean) commit d88fadbd65c5e8bde22630d251766a634c7613b0 - platform linux/amd64
I0307 20:52:16.908078 6 launch.go:257] validated default/ingress-default-backend as the default backend
I0307 20:52:18.693995 6 listers.go:134] loading object cache...
E0307 20:52:18.696953 6 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:19.982962 6 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:23.089836 6 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:28.419408 6 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:37.624105 6 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
I0307 20:52:45.320562 6 main.go:47] Shutting down with signal terminated
I0307 20:52:45.320631 6 controller.go:208] shutting down controller queues
E0307 20:52:45.320675 6 listers.go:132] initial cache sync has timed out or shutdown has requested
I0307 20:52:45.320711 6 controller.go:87] HAProxy Ingress successfully initialized
I0307 20:52:45.320722 6 main.go:40] Exiting (0)

As per @jesús-lópez's comment, upgrading the Kubernetes version from 1.17 to 1.18.4 and reinstalling HAProxy Ingress resolved the issue.
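For reference, the forbidden message itself names the missing permission. If upgrading is not an option, an alternative (not the fix the asker ultimately used, so treat it as an untested sketch) would be to extend the ingress-controller ClusterRole above with read access to IngressClass objects:
# Sketch: extra rule for the existing ingress-controller ClusterRole,
# matching the permission named in the "forbidden" error above.
- apiGroups:
  - "networking.k8s.io"
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
The RBAC manifests bundled with newer haproxy-ingress releases should already include an equivalent rule, which is presumably why reinstalling after the upgrade also cleared the error.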

Related

Minikube Ingress address empty - Mac

I added a deployment with:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx:alpine
        ports:
        - containerPort: 80
And service with:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 8082
    targetPort: 80
Here is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx-testing.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx-service
            port:
              number: 8082
Before running this I ran:
minikube addons enable ingress -p my-cluster
But when I do kubectl get ingress, the ADDRESS field is empty.
Here are the ingress controller pod logs:
E1103 11:01:57.070385 7 leaderelection.go:361] Failed to update lock: configmaps "ingress-controller-leader" is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot update resource "configmaps" in API group "" in the namespace "ingress-nginx"
Am I missing anything here?
I'm trying the basic ingress rule but it's not working.
Can someone help me here, please?
EDIT 1:
Here is the role:
test % kubectl get Role -n ingress-nginx -oyaml ingress-nginx
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"name":"ingress-nginx","namespace":"ingress-nginx"},"rules":[{"apiGroups":[""],"resources":["namespaces"],"verbs":["get"]},{"apiGroups":[""],"resources":["configmaps","pods","secrets","endpoints"],"verbs":["get","list","watch"]},{"apiGroups":[""],"resources":["services"],"verbs":["get","list","watch"]},{"apiGroups":["extensions","networking.k8s.io"],"resources":["ingresses"],"verbs":["get","list","watch"]},{"apiGroups":["extensions","networking.k8s.io"],"resources":["ingresses/status"],"verbs":["update"]},{"apiGroups":["networking.k8s.io"],"resources":["ingressclasses"],"verbs":["get","list","watch"]},{"apiGroups":[""],"resourceNames":["ingress-controller-leader"],"resources":["configmaps"],"verbs":["get","update"]},{"apiGroups":[""],"resources":["configmaps"],"verbs":["create"]},{"apiGroups":[""],"resources":["events"],"verbs":["create","patch"]}]}
  creationTimestamp: "2021-11-03T12:29:32Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
  namespace: ingress-nginx
  resourceVersion: "2487"
  uid: e140ea8f-04a5-4bf2-9e26-9803d581f608
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
Solution found by another community member in the comments (their comment is now deleted), hence the CW.
There is a bug in minikube 1.23.0: NGINX Ingress addon Role resource references incorrect ConfigMap #12445
Solution is to upgrade minikube to 1.23.1, or 1.24.
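If you want to confirm the RBAC gap before upgrading, a quick check against the ServiceAccount and namespace named in the log above (the ConfigMap name is the one from the error message) is:
kubectl auth can-i update configmaps/ingress-controller-leader --as=system:serviceaccount:ingress-nginx:ingress-nginx -n ingress-nginx
On the affected minikube version this should print no; after upgrading minikube and re-enabling the ingress addon it should print yes.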

Forbidden resource in API group at the cluster scope

I am unable to identify the exact issue with the permissions in my setup, shown below. I've looked into all the similar Q&As but am still unable to solve it. The aim is to deploy Prometheus and let it scrape the /metrics endpoints that my other applications in the cluster already expose fine.
Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"endpoints\" in API group \"\" at the cluster scope"
Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"services\" in API group \"\" at the cluster scope"
...
...
The command below returns no for services, nodes, pods, etc.
kubectl auth can-i get services --as=system:serviceaccount:default:default -n default
Minikube
$ minikube start --vm-driver=virtualbox --extra-config=apiserver.Authorization.Mode=RBAC
😄 minikube v1.14.2 on Darwin 11.2
✨ Using the virtualbox driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🔄 Restarting existing virtualbox VM for "minikube" ...
🐳 Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
▪ apiserver.Authorization.Mode=RBAC
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass, dashboard
🏄 Done! kubectl is now configured to use "minikube" by default
Roles
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-cluster-role
rules:
- apiGroups: [""]
  resources: ["nodes", "services", "pods", "endpoints"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get"]
- apiGroups: ["extensions"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-cluster-role-binding
roleRef:
  kind: ClusterRole
  name: monitoring-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: monitoring-service-account
  namespace: default
Prometheus
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config-map
  namespace: default
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: default
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:latest
        ports:
        - name: http
          protocol: TCP
          containerPort: 9090
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus/
        - name: storage
          mountPath: /prometheus/
      volumes:
      - name: config
        configMap:
          name: prometheus-config-map
      - name: storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9090
User "system:serviceaccount:default:default" cannot list resource "endpoints" in API group "" at the cluster scope"
User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" at the cluster scope"
User "system:serviceaccount:default:default" cannot list resource "services" in API group "" at the cluster scope"
Something running with ServiceAccount default in namespace default is doing things it does not have permissions for.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring-service-account
Here you create a specific ServiceAccount. You also give it some Cluster-wide permissions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: default
You run Prometheus in namespace default but do not specify a specific ServiceAccount, so it will run with ServiceAccount default.
I think your problem is that you are supposed to set the ServiceAccount that you create in the Deployment-manifest for Prometheus.
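A minimal sketch of that change, reusing the names from the manifests above (only the serviceAccountName line is new, the rest is abridged):
# Sketch: relevant fragment of prometheus-deployment with the ServiceAccount set.
spec:
  template:
    spec:
      serviceAccountName: monitoring-service-account  # was missing, so the pod ran as default:default
      containers:
      - name: prometheus
        image: prom/prometheus:latest
After re-applying the Deployment, the kubectl auth can-i check from the question should return yes when run with --as=system:serviceaccount:default:monitoring-service-account.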

Kubernetes metrics server API

I am running a Kubernetes cluster in a dev environment. I executed the deployment files for metrics-server, and my pod is up and running without any error message. See the output here:
root@master:~/pre-release# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
metrics-server-568697d856-9jshp 1/1 Running 0 10m 10.244.1.5 worker-1 <none> <none>
Next, when I check the API service status, it shows up as below:
Name:         v1beta1.metrics.k8s.io
Namespace:
Labels:       k8s-app=metrics-server
Annotations:  <none>
API Version:  apiregistration.k8s.io/v1
Kind:         APIService
Metadata:
  Creation Timestamp:  2021-03-29T17:32:16Z
  Resource Version:    39213
  UID:                 201f685d-9ef5-4f0a-9749-8004d4d529f4
Spec:
  Group:                     metrics.k8s.io
  Group Priority Minimum:    100
  Insecure Skip TLS Verify:  true
  Service:
    Name:       metrics-server
    Namespace:  pre-release
    Port:       443
  Version:           v1beta1
  Version Priority:  100
Status:
  Conditions:
    Last Transition Time:  2021-03-29T17:32:16Z
    Message:               failing or missing response from https://10.105.171.253:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.171.253:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    Reason:                FailedDiscoveryCheck
    Status:                False
    Type:                  Available
Events:  <none>
Here is the metrics-server deployment code:
containers:
- args:
  - --cert-dir=/tmp
  - --secure-port=443
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
  - --kubelet-use-node-status-port
  image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
Here is the complete code:
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: pre-release
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: pre-release
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: pre-release
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: pre-release
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: pre-release
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=443
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        - --kubelet-use-node-status-port
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: pre-release
  version: v1beta1
  versionPriority: 100
Latest error:
I0330 09:02:31.705767 1 secure_serving.go:116] Serving securely on [::]:4443
E0330 09:04:01.718135 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:worker-2: unable to fetch metrics from Kubelet worker-2 (worker-2): Get https://worker-2:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup worker-2 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:master: unable to fetch metrics from Kubelet master (master): Get https://master:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup master on 10.96.0.10:53: read udp 10.244.2.23:41419->10.96.0.10:53: i/o timeout, unable to fully scrape metrics from source kubelet_summary:worker-1: unable to fetch metrics from Kubelet worker-1 (worker-1): Get https://worker-1:10250/stats/summary?only_cpu_and_memory=true: dial tcp: i/o timeout]
Could someone please help me fix the issue?
The following container arguments work for me in our development cluster:
containers:
- args:
  - /metrics-server
  - --cert-dir=/tmp
  - --secure-port=443
  - --kubelet-preferred-address-types=InternalIP
  - --kubelet-insecure-tls
Result of kubectl describe apiservice v1beta1.metrics.k8s.io:
Status:
  Conditions:
    Last Transition Time:  2021-03-29T19:19:20Z
    Message:               all checks passed
    Reason:                Passed
    Status:                True
    Type:                  Available
Give it a try.
As I mentioned in the comment section, this may be fixed by adding hostNetwork:true to the metrics-server Deployment.
According to kubernetes documentation:
HostNetwork - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.
spec:
  hostNetwork: true <---
  containers:
  - args:
    - /metrics-server
    - --cert-dir=/tmp
    - --secure-port=4443
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-use-node-status-port
    - --kubelet-insecure-tls
There is an example with information on how you can edit your deployment to include hostNetwork: true in your metrics-server deployment.
Also see the related GitHub issue.
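Once the metrics-server pod is recreated with that change (assuming nothing else is blocking the kubelet scrape), a quick way to verify is to re-check the APIService condition and ask for node metrics:
kubectl describe apiservice v1beta1.metrics.k8s.io
kubectl top nodes
kubectl top nodes should start returning CPU/memory for master, worker-1 and worker-2 once the discovery check passes.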

Unable to create ALBIngressController in Kubernetes environment

I’m creating an Application Load Balancer (AWSALBIngressController-v1.1.6) in the Kubernetes environment. While creating that, for some reason I’m getting the following error -
E1111 06:02:13.117566 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile LB managed SecurityGroup: failed to reconcile managed LoadBalancer securityGroup: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" "controller"="alb-ingress-controller" "request"={"Namespace":"sampleNamespace","Name":"alb-ingress"}
Following are the ALB config files for reference.
ALB Controller Deployment file:
---
# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resource for. For limiting ingress
  # namespace scope, see --watch-namespace.
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
      - name: alb-ingress-controller
        args:
        - --ingress-class=alb
        - --watch-namespace=sampleNamespace
        - --cluster-name=ckuster-xl
        - --aws-vpc-id=vpc-3d53e783
        - --aws-region=us-east-1
        - --default-tags=Name=tag1-xl-ALB,mgr=mgrname
        # newer version (v1.1.7) of the alb-ingress-controller image requires iam permission to wafv2
        # even when no wafv2 annotation is used
        image: docker.io/amazon/aws-alb-ingress-controller:v1.1.6
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
          limits:
            cpu: 200m
            memory: 200Mi
      serviceAccountName: alb-ingress-controller
RBAC yaml file:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
- apiGroups:
  - ""
  - extensions
  resources:
  - configmaps
  - endpoints
  - events
  - ingresses
  - ingresses/status
  - services
  verbs:
  - create
  - get
  - list
  - update
  - watch
  - patch
- apiGroups:
  - ""
  - extensions
  resources:
  - nodes
  - pods
  - secrets
  - services
  - namespaces
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
- kind: ServiceAccount
  name: alb-ingress-controller
  namespace: kube-system
I tried a few solutions, such as adding the --aws-api-debug and --aws-region args to the ALB deployment file and --auto-discover-base-arn and --auto-discover-default-role to the kiam server yaml file, but it didn't work.

How to create an additional NGINX ingress controller if there is an existing controller

I have an existing Nginx controller in my EKS cluster. This is a cluster-wide ingress controller. I want to create another Nginx controller to do some testing, also as a public-facing ingress controller. Is that possible? I tried creating a new namespace and then creating the new resources under that namespace, but the new controller started logging events for all the ingresses that were already present. Any idea how to do that?
You need to specify the annotation kubernetes.io/ingress.class: "$INGRESS_CONTROLLER".
For example, here you are saying nginx will be responsible for this ingress:
kind: Ingress
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
If you do not define a class, your cloud provider may use a default ingress controller.
using-multiple-ingress-controllers
alb.ingress.kubernetes.io/scheme annotation is used for deciding internal or public.
List of ingress Annotation
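Putting that together, a rough sketch for the second controller (the class name nginx-test is only an example, not something from the question) is to start it with its own --ingress-class and annotate only the test ingresses with that class, so the existing cluster-wide controller keeps ignoring them:
# Sketch: args for the second nginx-ingress-controller container
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --ingress-class=nginx-test

# Sketch: a test ingress served only by the new controller
kind: Ingress
metadata:
  name: foo-test
  annotations:
    kubernetes.io/ingress.class: "nginx-test"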
You can create an additional ingress controller by deploying the nginx ingress image as a Deployment or DaemonSet. Below are the manifests for this example. Once this is done, you should be able to access this ingress using a node IP and the NodePort. From there, a cloud load balancer that is reachable internally and resolves to the node IPs should get you working end to end.
Or you may be better off creating a Service of type LoadBalancer instead of the NodePort Service in the example below.
==> config-map.yml <==
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
==> deployment.yml <==
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
==> service-account.yml <==
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
  - update
  - create
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - k8s.nginx.org
  resources:
  - virtualservers
  - virtualserverroutes
  verbs:
  - list
  - watch
  - get
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
subjects:
- kind: ServiceAccount
  name: nginx-ingress
  namespace: nginx-ingress
roleRef:
  kind: ClusterRole
  name: nginx-ingress
  apiGroup: rbac.authorization.k8s.io
==> service.yml <==
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  selector:
    app: nginx-ingress
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https