How to create an additional NGINX ingress controller if there is an existing controller - kubernetes

I have an existing NGINX ingress controller in my EKS cluster, and it is cluster-wide. I want to create another NGINX controller for some testing; it will also be a public-facing ingress controller. Is it possible to do that? I tried creating a new namespace and then creating the new resources under that namespace, but the new controller started picking up all the Ingresses that were already present. Any idea how to do this?

You need to specify the annotation kubernetes.io/ingress.class: "$INGRESS_CONTROLLER".
For example, here you are saying that nginx will be responsible for this Ingress:
kind: Ingress
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
If you do not define a class, your cloud provider may use a default ingress controller.
using-multiple-ingress-controllers
The alb.ingress.kubernetes.io/scheme annotation is used to decide whether an ALB is internal or internet-facing.
List of ingress annotations
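To keep the second controller from watching the existing Ingresses, it can also be started with its own ingress class, and the test Ingresses then reference that class. Below is a minimal sketch, assuming the hypothetical class name nginx-test; the flag value and resource name are illustrative, not from the original answer.

# Extra argument on the second controller's container:
args:
  - /nginx-ingress-controller
  - --ingress-class=nginx-test
---
# Test Ingress that only the second controller should pick up:
kind: Ingress
metadata:
  name: foo-test
  annotations:
    kubernetes.io/ingress.class: "nginx-test"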

You can create an additional ingress controller by deploying the NGINX ingress image as a Deployment or DaemonSet. Below are the manifests for this example. Once this is done, you should be able to reach this ingress controller via a node IP and NodePort. From there, putting a cloud load balancer in front that is reachable from outside and resolves to the node IPs should get you working end to end.
Alternatively, you may be better off creating a Service of type LoadBalancer instead of the NodePort Service below (see the LoadBalancer variant after the manifests).
==> config-map.yml <==
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
==> deployment.yml <==
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
==> service-account.yml <==
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - list
      - watch
      - update
      - create
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - list
      - watch
      - get
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - k8s.nginx.org
    resources:
      - virtualservers
      - virtualserverroutes
    verbs:
      - list
      - watch
      - get
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-ingress
subjects:
  - kind: ServiceAccount
    name: nginx-ingress
    namespace: nginx-ingress
roleRef:
  kind: ClusterRole
  name: nginx-ingress
  apiGroup: rbac.authorization.k8s.io
==> service.yml <==
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  selector:
    app: nginx-ingress
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
    - port: 443
      targetPort: 443
      protocol: TCP
      name: https
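If you go with the LoadBalancer approach mentioned above, a minimal sketch of that Service is shown below; the commented AWS annotation is an assumption for the internal-LB case and is not part of the original manifests.

==> service-loadbalancer.yml <==
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  # annotations:
  #   service.beta.kubernetes.io/aws-load-balancer-internal: "true"   # uncomment for an internal ELB
spec:
  selector:
    app: nginx-ingress
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP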

Related

Error: stat /mnt/jenkins-store: no such file or directory

Very confused: I used an EC2 instance to bootstrap an EKS cluster and everything worked completely fine yesterday. I deleted that cluster last night, spun up a new one just now, and am getting this error when trying to build my Jenkins pod:
Error: stat /mnt/jenkins-store: no such file or directory
I find it strange that this error didn't show up yesterday, since I set everything up the exact same way today. That error is what I got when I described my Jenkins pod.
Here's my jenkins.yaml file for reference:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods","services"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
---
# Allows jenkins to create persistent volumes
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-crb
subjects:
  - kind: ServiceAccount
    namespace: default
    name: jenkins
roleRef:
  kind: ClusterRole
  name: jenkinsclusterrole
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: jenkinsclusterrole
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: default
spec:
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          env:
            - name: JAVA_OPTS
              value: -Djenkins.install.runSetupWizard=false
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
          volumeMounts:
            - name: jenkins-home
              mountPath: /var
              subPath: jenkins_home
            - name: docker-sock-volume
              mountPath: "/var/run/docker.sock"
          imagePullPolicy: Always
      volumes:
        # This allows jenkins to use the docker daemon on the host, for running builds
        # see https://stackoverflow.com/questions/27879713/is-it-ok-to-run-docker-from-inside-docker
        - name: docker-sock-volume
          hostPath:
            path: /var/run/docker.sock
        - name: jenkins-home
          hostPath:
            path: /mnt/jenkins-store
      serviceAccountName: jenkins
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: default
spec:
  type: NodePort
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
      nodePort: 31000
    - name: jnlp
      port: 50000
      targetPort: 50000
  selector:
    app: jenkins
---
Similarly to this answer, check whether the EC2 instance launched for your new cluster (via ec2 run-instances) is missing some volumes that the jenkins.yaml then expects to use.
The lack of such a volume would explain the Error: stat /mnt/jenkins-store message.
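If the directory simply does not exist on the new node, one hedged workaround (my assumption, not part of the original answer) is to set the hostPath type so the kubelet creates the directory instead of failing the stat call:

volumes:
  - name: jenkins-home
    hostPath:
      path: /mnt/jenkins-store
      type: DirectoryOrCreate   # create the directory on the node if it is missing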

Minikube Ingress address empty - Mac

I added a deployment with:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-container
          image: nginx:alpine
          ports:
            - containerPort: 80
And service with:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 8082
      targetPort: 80
Here is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: nginx-testing.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx-service
                port:
                  number: 8082
Before running this I ran:
minikube addons enable ingress -p my-cluster
But when I do kubectl get ingress, the ADDRESS column is empty.
Here are the ingress controller pod logs:
E1103 11:01:57.070385 7 leaderelection.go:361] Failed to update lock: configmaps "ingress-controller-leader" is forbidden: User "system:serviceaccount:ingress-nginx:ingress-nginx" cannot update resource "configmaps" in API group "" in the namespace "ingress-nginx"
Am I missing anything here?
I'm trying the basic ingress rule but it's not working.
Can someone help me here, please?
EDIT 1:
Here is the role:
test % kubectl get Role -n ingress-nginx -oyaml ingress-nginx
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"name":"ingress-nginx","namespace":"ingress-nginx"},"rules":[{"apiGroups":[""],"resources":["namespaces"],"verbs":["get"]},{"apiGroups":[""],"resources":["configmaps","pods","secrets","endpoints"],"verbs":["get","list","watch"]},{"apiGroups":[""],"resources":["services"],"verbs":["get","list","watch"]},{"apiGroups":["extensions","networking.k8s.io"],"resources":["ingresses"],"verbs":["get","list","watch"]},{"apiGroups":["extensions","networking.k8s.io"],"resources":["ingresses/status"],"verbs":["update"]},{"apiGroups":["networking.k8s.io"],"resources":["ingressclasses"],"verbs":["get","list","watch"]},{"apiGroups":[""],"resourceNames":["ingress-controller-leader"],"resources":["configmaps"],"verbs":["get","update"]},{"apiGroups":[""],"resources":["configmaps"],"verbs":["create"]},{"apiGroups":[""],"resources":["events"],"verbs":["create","patch"]}]}
  creationTimestamp: "2021-11-03T12:29:32Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
  namespace: ingress-nginx
  resourceVersion: "2487"
  uid: e140ea8f-04a5-4bf2-9e26-9803d581f608
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resourceNames:
      - ingress-controller-leader
    resources:
      - configmaps
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
Solution found by another community member in the comments (their comment is now deleted), hence the CW.
There is a bug in minikube 1.23.0: NGINX Ingress addon Role resource references incorrect ConfigMap #12445
The solution is to upgrade minikube to 1.23.1 or 1.24.
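A minimal sketch of that upgrade on macOS, assuming minikube was installed via Homebrew and the profile is named my-cluster as in the question (recreating the profile is the simplest way to pick up the fixed addon manifests):

brew upgrade minikube            # or install the 1.23.1+ binary directly
minikube delete -p my-cluster    # recreate the profile so the fixed ingress addon Role is applied
minikube start -p my-cluster
minikube addons enable ingress -p my-cluster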

How to fix "Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden"

I have HAProxy Ingress in Kubernetes on AKS. After upgrading the Kubernetes version, I get errors from HAProxy. I tried to solve the problem by modifying my old haproxy.yaml to avoid deprecated APIs and to pull the latest HAProxy Ingress image, but the error persists. How can I fix the errors?
I also tried this answer, but it doesn't work for me.
I checked this issue on GitHub, but although I use v0.12-snapshot.3 the error persists.
This is my modified haproxy.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-controller
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: ingress-controller
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: default
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: ingress-controller
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-controller
subjects:
  - kind: ServiceAccount
    name: ingress-controller
    namespace: default
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: ingress-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: ingress-default-backend
  name: ingress-default-backend
  namespace: default
spec:
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
        - name: ingress-default-backend
          image: gcr.io/google_containers/defaultbackend:1.0
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-default-backend
  namespace: default
spec:
  ports:
    - port: 8080
  selector:
    run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
spec:
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: ingress-controller
      containers:
        - name: haproxy-ingress
          image: quay.io/jcmoraisjr/haproxy-ingress:v0.12.1
          imagePullPolicy: Always
          resources:
            requests:
              memory: "64Mi"
              cpu: "75m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          args:
            - --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
            - --configmap=$(POD_NAMESPACE)/haproxy-ingress
            - --reload-strategy=reusesocket
          ports:
            - name: https
              containerPort: 443
            - name: stat
              containerPort: 1936
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10253
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-ingress
  namespace: default
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: https
      port: 443
    - name: stat
      port: 1936
  selector:
    run: haproxy-ingress
The following is the output of kubectl logs:
I0307 20:52:16.873675 6 launch.go:215]
Name: HAProxy
Release: v0.12-snapshot.3
Build: git-b34edd0
Repository: https://github.com/jcmoraisjr/haproxy-ingress
I0307 20:52:16.873776 6 launch.go:218] watching for ingress resources with 'kubernetes.io/ingress.class' annotation: haproxy
I0307 20:52:16.873787 6 launch.go:225] watching for ingress resources with IngressClass' controller name: haproxy-ingress.github.io/controller
I0307 20:52:16.873802 6 launch.go:230] ignoring ingress resources without any class reference - --watch-ingress-without-class is false
I0307 20:52:16.873968 6 launch.go:492] Creating API client for https://10.0.0.1:443
I0307 20:52:16.902520 6 launch.go:504] Running in Kubernetes Cluster version v1.17 (v1.17.16) - git (clean) commit d88fadbd65c5e8bde22630d251766a634c7613b0 - platform linux/amd64
I0307 20:52:16.908078 6 launch.go:257] validated default/ingress-default-backend as the default backend
I0307 20:52:18.693995 6 listers.go:134] loading object cache...
E0307 20:52:18.696953 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:19.982962 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:23.089836 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:28.419408 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
E0307 20:52:37.624105 6 reflector.go:127] pkg/mod/k8s.io/client-go#v0.19.0/tools/cache/reflector.go:156: Failed to watch *v1beta1.IngressClass: failed to list *v1beta1.IngressClass: ingressclasses.networking.k8s.io is forbidden: User "system:serviceaccount:default:ingress-controller" cannot list resource "ingressclasses" in API group "networking.k8s.io" at the cluster scope
I0307 20:52:45.320562 6 main.go:47] Shutting down with signal terminated
I0307 20:52:45.320631 6 controller.go:208] shutting down controller queues
E0307 20:52:45.320675 6 listers.go:132] initial cache sync has timed out or shutdown has requested
I0307 20:52:45.320711 6 controller.go:87] HAProxy Ingress successfully initialized
I0307 20:52:45.320722 6 main.go:40] Exiting (0)
As per #jesús-lópez's comment, upgrading the Kubernetes version from 1.17 to 1.18.4 and reinstalling HAProxy resolved the issue.
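The log lines show the ingress-controller service account being denied list/watch on ingressclasses, so an alternative (or complementary) fix, which is my assumption rather than part of the accepted answer, is to grant that permission in the ClusterRole used by the controller:

- apiGroups:
    - networking.k8s.io
  resources:
    - ingressclasses
  verbs:
    - get
    - list
    - watch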

Unable to create ALBIngressController in Kubernetes environment

I'm creating an Application Load Balancer (AWSALBIngressController-v1.1.6) in the Kubernetes environment. While creating that, for some reason I'm getting the following error:
E1111 06:02:13.117566 1 controller.go:217] kubebuilder/controller "msg"="Reconciler error" "error"="failed to reconcile LB managed SecurityGroup: failed to reconcile managed LoadBalancer securityGroup: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors" "controller"="alb-ingress-controller" "request"={"Namespace":"sampleNamespace","Name":"alb-ingress"}
Following are the ALB config files for reference:
ALB Controller Deployment file:
---
# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resource for. For limiting ingress
  # namespace scope, see --watch-namespace.
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: alb-ingress-controller
  template:
    metadata:
      labels:
        app.kubernetes.io/name: alb-ingress-controller
    spec:
      containers:
        - name: alb-ingress-controller
          args:
            - --ingress-class=alb
            - --watch-namespace=sampleNamespace
            - --cluster-name=ckuster-xl
            - --aws-vpc-id=vpc-3d53e783
            - --aws-region=us-east-1
            - --default-tags=Name=tag1-xl-ALB,mgr=mgrname
          # newer version (v1.1.7) of the alb-ingress-controller image requires iam permission to wafv2
          # even when no wafv2 annotation is used
          image: docker.io/amazon/aws-alb-ingress-controller:v1.1.6
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
            limits:
              cpu: 200m
              memory: 200Mi
      serviceAccountName: alb-ingress-controller
RBAC yaml file:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
rules:
  - apiGroups:
      - ""
      - extensions
    resources:
      - configmaps
      - endpoints
      - events
      - ingresses
      - ingresses/status
      - services
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - ""
      - extensions
    resources:
      - nodes
      - pods
      - secrets
      - services
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: alb-ingress-controller
  name: alb-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alb-ingress-controller
subjects:
  - kind: ServiceAccount
    name: alb-ingress-controller
    namespace: kube-system
I tried a few solutions, like adding the --aws-api-debug and --aws-region args in the ALB deployment file and --auto-discover-base-arn and --auto-discover-default-role in the kiam server yaml file, but it didn't work.
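The NoCredentialProviders error means the controller pod cannot obtain any AWS credentials at all (instance profile, kiam-issued role, or IRSA). One common way to provide them on EKS is IAM Roles for Service Accounts; below is a minimal sketch, assuming an IAM role with the ALB ingress IAM policy already exists (the ARN is a placeholder, not from the question):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: alb-ingress-controller
  namespace: kube-system
  annotations:
    # placeholder ARN - replace with the IAM role that carries the ALB ingress policy
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/alb-ingress-controller

If kiam is in use instead, the pod template needs the iam.amazonaws.com/role annotation and the namespace must permit that role.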

Ingress: Connection refused, however, it works from cluster

I have set up an Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ template "mychart.fullname" . }}-app
  annotations:
    # type of authentication [basic|digest]
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: {{ template "mychart.fullname" . }}-myauthsecret
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
spec:
  rules:
    - host: "test.example.com"
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ template "mychart.fullname" . }}-app
              servicePort: 80
But, when I test it, I get connection refused:
curl -H 'Host: test.example.com' http://{public ip}/
When I test it on the machine where the cluster runs, it works properly:
curl -H 'Host: test.example.com' https://10.96.183.247/
10.96.183.247 is the cluster-internal IP.
Thank you for the comments. I hadn't noticed that I had no NGINX ingress controller installed on the new bare-metal cluster.
Here is the missing part, the ingress controller with hostPort:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
---
# tcp-services-configmap
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
# udp-services-configmap
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
# rbac start
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
# rbac end
# with-rbac start
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              hostPort: 80 # !!!!!!
            - name: https
              containerPort: 443
              hostPort: 443 # !!!!!!
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            runAsNonRoot: false
---
# with-rbac end
# default-backend start
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.4
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
spec:
  selector:
    app: default-http-backend
  ports:
    - port: 80
      targetPort: 8080
---
# default-backend end