Unable to create ALBIngressController in Kubernetes environment - kubernetes

I'm creating an Application Load Balancer with the AWS ALB Ingress Controller (v1.1.6) in a Kubernetes environment. While creating it, for some reason I'm getting the following error:
E1111 06:02:13.117566 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile LB managed SecurityGroup: failed to reconcile managed LoadBalancer securityGroup: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" "controller"="alb-ingress-controller" "request"={"Namespace":"sampleNamespace","Name":"alb-ingress"}
Following are the ALB config files for reference.
ALB Controller Deployment file:
---
# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resource for. For limiting ingress
  # namespace scope, see --watch-namespace.
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
        - name: alb-ingress-controller
          args:
            - --ingress-class=alb
            - --watch-namespace=sampleNamespace
            - --cluster-name=ckuster-xl
            - --aws-vpc-id=vpc-3d53e783
            - --aws-region=us-east-1
            - --default-tags=Name=tag1-xl-ALB,mgr=mgrname
          # newer version (v1.1.7) of the alb-ingress-controller image requires iam permission to wafv2
          # even when no wafv2 annotation is used
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.6
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
            limits:
              cpu: 200m
              memory: 200Mi
      serviceAccountName: alb-ingress-controller
RBAC YAML file:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
I tried a few solutions, like adding the --aws-api-debug and --aws-region args in the ALB deployment file and --auto-discover-base-arn, --auto-discover-default-role in the kiam server YAML file, but it didn't work.
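For context on the NoCredentialProviders error: the controller pod is not finding AWS credentials through any provider in the SDK chain (environment variables, shared credentials file, or instance/role metadata). A minimal sketch of one common way to supply them on EKS is IAM Roles for Service Accounts (IRSA); the account ID and role name below are placeholders, and kiam users would instead rely on an iam.amazonaws.com/role annotation on the controller pod template:
# Sketch only: ServiceAccount annotated for IRSA; the role must carry the
# ALB ingress controller IAM policy. Replace the ARN with your own role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: alb-ingress-controller
  namespace: kube-system
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/alb-ingress-controller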

Related

How to fix "Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden"

I have HAProxy Ingress in Kubernetes AKS. After upgrading the Kubernetes version, I get errors from HAProxy. I tried to solve the problem by modifying my old haproxy.yaml to avoid deprecated APIs and to get the latest image of HAProxy Ingress, but the error persists. How can I fix the errors?
I also tried this answer, but it doesn't work for me.
I checked this issue on GitHub, but despite using v0.12-snapshot.3 the error persists.
This is my modified haproxy.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: default
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: default
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
        - name: ingress-default-backend
          image: gcr.io/google_containers/defaultbackend:1.0
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
    - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: ingress-controller
      containers:
        - name: haproxy-ingress
          image: quay.io/jcmoraisjr/haproxy-ingress:v0.12.1
          imagePullPolicy: Always
          resources:
            requests:
              memory: "64Mi"
              cpu: "75m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          args:
            - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
            - --configmap=$(POD_NAMESPACE)/haproxy-ingress
            - --reload-strategy=reusesocket
          ports:
            - name: https
              containerPort: 443
            - name: stat
              containerPort: 1936
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10253
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
    - name: stat
      port: 1936
  selector:
    run: haproxy-ingress
The following is the output of kubectl logs:
I0307 20:52:16.873675 6 launch.go:215]
Name: HAProxy
Release: v0.12-snapshot.3
Build: git-b34edd0
Repository: https://github.com/jcmoraisjr/haproxy-ingress
I0307 20:52:16.873776 6 launch.go:218] watching for ingress resources with 'kubernetes.io/ingress.class' annotation: haproxy
I0307 20:52:16.873787 6 launch.go:225] watching for ingress resources with IngressClass' controller name: haproxy-ingress.github.io/controller
I0307 20:52:16.873802 6 launch.go:230] ignoring ingress resources without any class reference - --watch-ingress-without-class is false
I0307 20:52:16.873968 6 launch.go:492] Creating API client for https://10.0.0.1:443
I0307 20:52:16.902520 6 launch.go:504] Running in Kubernetes Cluster version v1.17 (v1.17.16) - git (clean) commit d88fadbd65c5e8bde22630d251766a634c7613b0 - platform linux/amd64
I0307 20:52:16.908078 6 launch.go:257] validated default/ingress-default-backend as the default backend
I0307 20:52:18.693995 6 listers.go:134] loading object cache...
E0307 20:52:18.696953 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:19.982962 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:23.089836 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:28.419408 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:37.624105 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
I0307 20:52:45.320562 6 main.go:47] Shutting down with signal terminated
I0307 20:52:45.320631 6 controller.go:208] shutting down controller queues
E0307 20:52:45.320675 6 listers.go:132] initial cache sync has timed out or shutdown has requested
I0307 20:52:45.320711 6 controller.go:87] HAProxy Ingress successfully initialized
I0307 20:52:45.320722 6 main.go:40] Exiting (0)
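The forbidden messages above point at a missing RBAC rule: the ingress-controller service account cannot list or watch ingressclasses in the networking.k8s.io API group. A hedged sketch of the extra rule that would grant it, to be appended to the rules of the ClusterRole in the manifest above:
# Sketch only: grant cluster-scoped read access to IngressClass objects.
- apiGroups:
    - networking.k8s.io
  resources:
    - ingressclasses
  verbs:
    - get
    - list
    - watch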
As per #jesús-lópez's comment, upgrading the Kubernetes version from 1.17 to 1.18.4 and reinstalling haproxy resolved the issue.

Unable to find Prometheus custom app exporter as a target in Prometheus server deployed in Kubernetes

I created a custom exporter in Python using the prometheus-client package, then created the necessary artifacts to expose the metric as a target in Prometheus deployed on Kubernetes.
But I am unable to see the metric as a target despite following all available instructions.
Help in finding the problem is appreciated.
Here is the summary of what I did.
Installed Prometheus using Helm on the K8s cluster, in a namespace prometheus
Created a Python program with the prometheus-client package to create a metric
Created and deployed an image of the exporter to Docker Hub
Created a deployment against the metrics image, in a namespace prom-test
Created a Service, ServiceMonitor, and a ServiceMonitorSelector
Created a service account, role and binding to enable access to the endpoint
Following is the code.
Service & Deployment
apiVersion: v1
kind: Service
metadata:
  name: test-app-exporter
  namespace: prom-test
  labels:
    app: test-app-exporter
spec:
  type: ClusterIP
  selector:
    app: test-app-exporter
  ports:
    - name: http
      protocol: TCP
      port: 6000
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: prom-test
spec:
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      #serviceAccount: test-app-exporter-sa
      containers:
        - name: test-app-exporter
          image: index.docker.io/cbotlagu/test-app-exporter:2
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - name: http
              containerPort: 5000
      imagePullSecrets:
        - name: myregistrykey
Service account and role binding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-app-exporter-sa
  namespace: prom-test
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-app-exporter-binding
subjects:
  - kind: ServiceAccount
    name: test-app-exporter-sa
    namespace: prom-test
roleRef:
  kind: ClusterRole
  name: test-app-exporter-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-app-exporter-role
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/metrics
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources:
      - configmaps
    verbs: ["get"]
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs: ["get", "list", "watch"]
Service Monitor & Selector
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: test-app-exporter-sm
  namespace: prometheus
  labels:
    app: test-app-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      # Target app service
      app: test-app-exporter
  endpoints:
    - port: http
      interval: 15s
      path: /metrics
  namespaceSelector:
    matchNames:
      - default
      - prom-test
    #any: true
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: service-monitor-selector
  namespace: prometheus
spec:
  serviceAccountName: test-app-exporter-sa
  serviceMonitorSelector:
    matchLabels:
      app: test-app-exporter-sm
      release: prometheus
  resources:
    requests:
      memory: 400Mi
I am able to get the target identified by Prometheus. But although the endpoint can be reached within the cluster as well as from the node IP, Prometheus says the target is down.
In addition to that, I am unable to see any other target.
[Prom-UI screenshot]
Any help is greatly appreciated
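One thing worth checking from the manifests above: the Prometheus resource selects ServiceMonitors with app: test-app-exporter-sm, while the ServiceMonitor itself is labelled app: test-app-exporter, so it would not be matched. A sketch of how to inspect what is actually selected (the prometheus-operated service name is an assumption from a default prometheus-operator install):
# Labels carried by the ServiceMonitor
kubectl -n prometheus get servicemonitor test-app-exporter-sm --show-labels
# Selector used by the Prometheus resource
kubectl -n prometheus get prometheus service-monitor-selector -o yaml
# Inspect the scrape targets in the Prometheus UI
kubectl -n prometheus port-forward svc/prometheus-operated 9090:9090
# then open http://localhost:9090/targets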
Following is my changed code
Deployment & Service
apiVersion: v1
kind: Namespace
metadata:
  name: prom-test
---
apiVersion: v1
kind: Service
metadata:
  name: test-app-exporter
  namespace: prom-test
  labels:
    app: test-app-exporter
spec:
  type: NodePort
  selector:
    app: test-app-exporter
  ports:
    - name: http
      protocol: TCP
      port: 5000
      targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: prom-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      serviceAccountName: rel-1-kube-prometheus-stac-operator
      containers:
        - name: test-app-exporter
          image: index.docker.io/cbotlagu/test-app-exporter:2
          imagePullPolicy: Always
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - name: http
              containerPort: 5000
      imagePullSecrets:
        - name: myregistrykey
Cluster Roles
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: prom-test
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
      - endpoints
      - pods
      - services
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-prom-test
  namespace: prom-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
  - kind: ServiceAccount
    name: rel-1-kube-prometheus-stac-operator
    namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: monitoring-role
  namespace: monitoring
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
      - endpoints
      - pods
      - services
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-prom-test
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-role
subjects:
  - kind: ServiceAccount
    name: rel-1-kube-prometheus-stac-operator
    namespace: monitoring
Service Monitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: test-app-exporter-sm
  namespace: monitoring
  labels:
    app: test-app-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      # Target app service
      app: test-app-exporter
  endpoints:
    - port: http
      interval: 15s
      path: /metrics
  namespaceSelector:
    matchNames:
      - prom-test
      - monitoring
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: service-monitor-selector
  namespace: monitoring
spec:
  serviceAccountName: rel-1-kube-prometheus-stac-operator
  serviceMonitorSelector:
    matchLabels:
      app: test-app-exporter
      release: prometheus
  resources:
    requests:
      memory: 400Mi
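Before blaming service discovery, it may also help to confirm that the exporter itself answers on /metrics; a sketch using the Service defined above:
kubectl -n prom-test port-forward svc/test-app-exporter 5000:5000
# in another shell:
curl -s http://localhost:5000/metrics | head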

how to deploy a statefulset when a pod security policy is in place in kubernetes

I am trying to play around with PodSecurityPolicies in kubernetes so pods can't be created if they are using the root user.
This is my psp definition:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: eks.restrictive
spec:
  hostNetwork: false
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: MustRunAsNonRoot
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
and this is my statefulset definition
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      securityContext:
        # only takes integers.
        runAsUser: 1000
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: k8s.gcr.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "my-storage-class"
        resources:
          requests:
            storage: 1Gi
When trying to create this statefulset I get
create Pod web-0 in StatefulSet web failed error: pods "web-0" is forbidden: unable to validate against any pod security policy:
It doesn't specify which policy I am violating, and since I am specifying that I want to run this as user 1000, I am not running it as root (hence my understanding is that this statefulset pod definition is not violating any rules defined in the PSP). There is no USER specified in the Dockerfile used for this image.
The other weird part is that this works fine for standard pods (kind: Pod instead of kind: StatefulSet). For example, this works just fine when the same PSP exists:
apiVersion: v1
kind: Pod
metadata:
  name: my-nodejs
spec:
  securityContext:
    runAsUser: 1000
  containers:
    - name: my-node
      image: node
      ports:
        - name: web
          containerPort: 80
          protocol: TCP
      command:
        - /bin/sh
        - -c
        - |
          npm install http-server -g
          npx http-server
What am I missing / doing wrong?
You seem to have forgotten to bind this PSP to a service account.
You need to apply the following:
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-role
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames:
      - eks.restrictive
EOF
cat << EOF | kubectl apply -f-
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-role
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
EOF
If you don't want to use the default account, you can create a separate service account and bind the role to it.
Read more about it in the k8s documentation - pod-security-policy.
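To confirm the binding is effective, a quick check (a sketch; it assumes the workload runs as the default service account in the default namespace, as in the binding above):
# Should print "yes" once the ClusterRole and RoleBinding are applied
kubectl auth can-i use podsecuritypolicy/eks.restrictive \
  --as=system:serviceaccount:default:default -n default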

Multiple ingress controller is not working

I'm creating multiple ingress controllers in different namespaces. Initially, each creates a load balancer in AWS and attaches pod IP addresses to target groups. After some days it stops updating new pod IPs in the target group. I've attached the ingress controller logs here.
E0712 15:02:30.516295 1 leaderelection.go:270] error retrieving resource lock namespace1/ingress-controller-leader-alb: configmaps "ingress-controller-leader-alb" is forbidden: User "system:serviceaccount:namespace1:fc-serviceaccount-icalb" cannot get resource "configmaps" in API group "" in the namespace "namespace1"
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "fc-ingress"
  annotations:
    kubernetes.io/ingress.class: alb-namespace1
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/subnets:
    alb.ingress.kubernetes.io/certificate-arn:
    alb.ingress.kubernetes.io/ssl-policy:
    alb.ingress.kubernetes.io/security-groups:
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: '/'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '2'
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '5'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=false
    alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=false
    alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
    alb.ingress.kubernetes.io/target-group-attributes: slow_start.duration_seconds=0
    alb.ingress.kubernetes.io/target-group-attributes: deregistration_delay.timeout_seconds=300
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=false
  labels:
    app: fc-label-app-ingress
spec:
  rules:
    - host: "hostname1.com"
      http:
        paths:
          - backend:
              serviceName: service1
              servicePort: 80
    - host: "hostname2.com"
      http:
        paths:
          - backend:
              serviceName: service2
              servicePort: 80
    - host: "hostname3.com"
      http:
        paths:
          - backend:
              serviceName: service3
              servicePort: 80
ingress_controller.yaml
# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-ingress-controller-alb
  namespace: namespace1
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resource for. For limiting ingress
  # namespace scope, see --watch-namespace.
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fc-label-app-icalb
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: fc-label-app-icalb
    spec:
      containers:
        - args:
            # Limit the namespace where this ALB Ingress Controller deployment will
            # resolve ingress resources. If left commented, all namespaces are used.
            - --watch-namespace=namespace1
            # Setting the ingress-class flag below ensures that only ingress resources with the
            # annotation kubernetes.io/ingress.class: "alb" are respected by the controller. You may
            # choose any class you'd like for this controller to respect.
            - --ingress-class=alb-namespace1
            # Name of your cluster. Used when naming resources created
            # by the ALB Ingress Controller, providing distinction between
            # clusters.
            - --cluster-name=$EKS_CLUSTER_NAME
            # AWS VPC ID this ingress controller will use to create AWS resources.
            # If unspecified, it will be discovered from ec2metadata.
            # - --aws-vpc-id=vpc-xxxxxx
            # AWS region this ingress controller will operate in.
            # If unspecified, it will be discovered from ec2metadata.
            # List of regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#vpc_region
            # - --aws-region=us-west-1
            # Enables logging on all outbound requests sent to the AWS API.
            # If logging is desired, set to true.
            # - ---aws-api-debug
            # Maximum number of times to retry the aws calls.
            # defaults to 10.
            # - --aws-max-retries=10
          env:
            # AWS key id for authenticating with the AWS API.
            # This is only here for examples. It's recommended you instead use
            # a project like kube2iam for granting access.
            #- name: AWS_ACCESS_KEY_ID
            #  value: KEYVALUE
            # AWS key secret for authenticating with the AWS API.
            # This is only here for examples. It's recommended you instead use
            # a project like kube2iam for granting access.
            #- name: AWS_SECRET_ACCESS_KEY
            #  value: SECRETVALUE
          # Repository location of the ALB Ingress Controller.
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.4
          imagePullPolicy: Always
          name: server
          resources: {}
          terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      serviceAccountName: fc-serviceaccount-icalb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-clusterrole-icalb
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-clusterrolebinding-icalb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fc-clusterrole-icalb
subjects:
  - kind: ServiceAccount
    name: fc-serviceaccount-icalb
    namespace: namespace1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: fc-label-app-icalb
  name: fc-serviceaccount-icalb
  namespace: namespace1
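Regarding the forbidden configmaps error in the log, a quick way to verify whether the RBAC objects above are actually applied and bound as intended (a sketch; names are taken from the manifests in this question):
# Should print "yes" if the ClusterRole and ClusterRoleBinding are in effect
kubectl auth can-i get configmaps -n namespace1 \
  --as=system:serviceaccount:namespace1:fc-serviceaccount-icalb
# Check that the binding really references the service account
kubectl get clusterrolebinding fc-clusterrolebinding-icalb -o yaml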
I have had an issue like that on AKS. I have two Nginx Ingress Controllers:
external-nginx-ingress
internal-nginx-ingress
Only one worked at a time, internal or external.
After specifying a unique election-id for each one, the problem was fixed.
I use the following HELM chart:
Repository = "https://kubernetes.github.io/ingress-nginx"
Chart = "ingress-nginx"
Chart_version = "4.1.3"
K8s Version = "1.22.4"
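For reference, a hedged sketch of how a unique election id (and ingress class) can be set per release of the ingress-nginx chart; controller.electionID and controller.ingressClassResource.* are the chart values I believe carry these settings, so verify them against your chart version:
helm upgrade --install external-nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress \
  --set controller.electionID=external-ingress-controller-leader \
  --set controller.ingressClassResource.name=external-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx
helm upgrade --install internal-nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress \
  --set controller.electionID=internal-ingress-controller-leader \
  --set controller.ingressClassResource.name=internal-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/internal-ingress-nginx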
Deployment
kubectl get deploy -n ingress
NAME                                READY   UP-TO-DATE   AVAILABLE
external-nginx-ingress-controller   3/3     3            3
internal-nginx-ingress-controller   1/1     1            1
IngressClass
kubectl get ingressclass
NAME             CONTROLLER                      PARAMETERS
external-nginx   k8s.io/ingress-nginx            <none>
internal-nginx   k8s.io/internal-ingress-nginx   <none>
Deployment for External
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-nginx-ingress-controller
  namespace: ingress
  annotations:
    meta.helm.sh/release-name: external-nginx-ingress
    meta.helm.sh/release-namespace: ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: external-nginx-ingress
      app.kubernetes.io/name: ingress-nginx
  template:
    spec:
      containers:
        - name: ingress-nginx-external-controller
          image: >-
            k8s.gcr.io/ingress-nginx/controller:v1.2.1
          args:
            - /nginx-ingress-controller
            - >-
              --publish-service=$(POD_NAMESPACE)/external-nginx-ingress-controller
            - '--election-id=external-ingress-controller-leader'
            - '--controller-class=k8s.io/ingress-nginx'
            - '--ingress-class=external-nginx'
            - '--ingress-class-by-name=true'
Deployment for Internal
apiVersion: apps/v1
kind: Deployment
metadata:
  name: internal-nginx-ingress-controller
  namespace: ingress
  annotations:
    meta.helm.sh/release-name: internal-nginx-ingress
    meta.helm.sh/release-namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: internal-nginx-ingress
      app.kubernetes.io/name: ingress-nginx
  template:
    spec:
      containers:
        - name: ingress-nginx-internal-controller
          image: >-
            k8s.gcr.io/ingress-nginx/controller:v1.2.1
          args:
            - /nginx-ingress-controller
            - >-
              --publish-service=$(POD_NAMESPACE)/internal-nginx-ingress-controller
            - '--election-id=internal-ingress-controller-leader'
            - '--controller-class=k8s.io/internal-ingress-nginx'
            - '--ingress-class=internal-nginx'
            - '--ingress-class-by-name=true'
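With the two controllers in place, each Ingress picks its controller through spec.ingressClassName; a minimal sketch (name, host and backend service are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app              # placeholder
  namespace: default
spec:
  ingressClassName: internal-nginx   # or external-nginx for the public controller
  rules:
    - host: app.example.internal     # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app    # placeholder
                port:
                  number: 80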

Kubernetes - RBAC issue with ingress controller

I'm following a tutorial by Diego Martínez, outlining how to use an ingress controller with SSL on K8s. Everything works fine, with the exception of an RBAC error:
It seems the cluster it is running with Authorization enabled (like RBAC) and there is no permissions for the ingress controller. Please check the configuration
Does anyone know how I can grant RBAC permissions to this resource?
I'm running on Google Cloud, and for reference, below is the ingress deployment spec
If you are deploying nginx-ingress, perhaps the nginx-ingress Helm chart is a simpler way to do it.
You can follow the guide in the nginx-ingress documentation on installation in RBAC-enabled clusters.
Specifically addressing your question regarding adding the RBAC permissions, you will need to add something like:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
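After applying these manifests (and pointing the controller deployment at nginx-ingress-serviceaccount via serviceAccountName), the permissions can be spot-checked with kubectl; a sketch:
kubectl auth can-i list pods \
  --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
kubectl auth can-i watch services \
  --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount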