Kubernetes Dashboard Ingress returning empty response from server

I am trying to set up the Kubernetes Dashboard. I have enabled custom SSL certs for my domain, and I can curl the pod directly with no issues; I can also curl the service and it works. However, when I try to access it via Ingress I get curl: (52) Empty reply from server. I have an NLB forwarding to the port of the nginx controller service (Ingress works fine with another app). Here is my Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  labels:
    app: dashboard
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: k8sdash.domain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
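Since the pod and the Service both respond over HTTPS, it helps to test the remaining hops one at a time before changing the config. A hedged sketch (the hostname comes from the Ingress above; the node and NLB addresses are placeholders):

# Hit the controller on a node directly, forcing the expected Host header
curl -kv -H 'Host: k8sdash.domain.com' https://<node-ip>/
# Hit the NLB, resolving the hostname to it explicitly
curl -kv --resolve k8sdash.domain.com:443:<nlb-ip> https://k8sdash.domain.com/

If the node-level call works but the NLB call does not, the problem sits in front of the controller; if both fail the same way, the controller-to-backend leg is the suspect.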
Here is the DaemonSet config for my ingress controllers.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "3"
  creationTimestamp: "2020-05-19T15:48:13Z"
  generation: 3
  labels:
    app: lb
    app.kubernetes.io/component: controller
    chart: nginx-ingress-1.36.3
    heritage: Tiller
    release: lb
  name: lb-controller
  namespace: kube-system
  resourceVersion: "747622"
  selfLink: /apis/apps/v1/namespaces/kube-system/daemonsets/lb-controller
  uid: 19d830ba-f2d9-4c6f-bc8d-d64667a900c7
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: lb
      release: lb
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: lb
        app.kubernetes.io/component: controller
        component: controller
        release: lb
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=kube-system/lb-default-backend
        - --publish-service=kube-system/lb-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=kube-system/lb-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: lb-controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: lb
      serviceAccountName: lb
      terminationGracePeriodSeconds: 60
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 3
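If the Ingress object itself is the suspect, the controller's logs and the resolved backends are the quickest confirmation; a sketch using the labels and names from the manifests above:

kubectl -n kube-system logs -l app=lb --tail=100
kubectl -n kubernetes-dashboard describe ingress dashboard-ingress

The describe output lists the backend endpoints the controller resolved for each path; an empty or wrong backend there would explain an empty reply even though the Service works when curled directly.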

Related

Kubernetes: Cannot connect to service when using named targetPort

Here's my config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  template:
    metadata:
      labels:
        pod: 338f54d2-8f89-4602-a848-efcbcb63233f
        svc: app
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: server
        image: server
        ports:
        - name: http-port
          containerPort: 3000
        resources:
          limits:
            memory: 128Mi
          requests:
            memory: 36Mi
        envFrom:
        - secretRef:
            name: db-env
        - secretRef:
            name: oauth-env
        startupProbe:
          httpGet:
            port: http
            path: /
          initialDelaySeconds: 1
          periodSeconds: 1
          failureThreshold: 10
        livenessProbe:
          httpGet:
            port: http
            path: /
          periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  ports:
  - port: 80
    targetPort: http-port
When I try that I can't connect to my site. When I change targetPort: http-port back to targetPort: 3000 it works fine. I thought the point of naming my port was so that I could use it in the targetPort. Does it not work with deployments?
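Named targetPorts do work with Deployments: the Service resolves targetPort: http-port against the containerPort names of the selected pods. One thing that stands out in the config above, though, is that the probes reference port: http while the declared name is http-port; a probe that names a non-existent port fails, the pods never become ready, and a Service will not route to unready pods, which could produce exactly this symptom. A minimal sketch with the names aligned (same manifests as above, nothing else changed):

ports:
- name: http-port
  containerPort: 3000
startupProbe:
  httpGet:
    port: http-port   # must match the containerPort name above
    path: /

With the names consistent, targetPort: http-port in the Service should behave exactly like targetPort: 3000.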

Configure ingress-nginx in a Kubernetes cluster across 2 namespaces

Good afternoon,
I am working with ingress-nginx to expose services in an on-premise Kubernetes cluster. In this cluster we manage two environments: Development (DEV) and Quality (QA).
What we want is to have one ingress-nginx per environment (DEV and QA), but so far I have not been able to configure it. I am applying the configuration below, but I cannot get the IP assigned to each controller to split requests by environment. For example:
DEV environment
controller-deployment-dev.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-dev
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-dev
      app.kubernetes.io/instance: ingress-nginx-dev
      app.kubernetes.io/component: controller-dev
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-dev
        app.kubernetes.io/instance: ingress-nginx-dev
        app.kubernetes.io/component: controller-dev
    spec:
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: regcred
      containers:
      - name: controller
        image: 10.164.7.203:37003/tmve/ingress-nginx/controller:v1.1.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --default-ssl-certificate=develop/srvdevma1-ssl
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          runAsUser: 101
          allowPrivilegeEscalation: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: webhook
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        resources:
          requests:
            cpu: 1
            memory: 512Mi
      nodeSelector:
        kubernetes.io/hostname: tcold016
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
controller-svc-dev.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-dev
  annotations:
    metallb.universe.tf/allow-shared-ip: shared-ip
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.161.169.12
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    appProtocol: http
  - name: https
    port: 30000
    protocol: TCP
    targetPort: https
    appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx-dev
    app.kubernetes.io/instance: ingress-nginx-dev
    app.kubernetes.io/component: controller-dev
Ingress rules for DEV:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-develop
  namespace: develop
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
  - secretName: srvdevma1-ssl
  rules:
  - http:
      paths:
      - path: /api/FindComplementaryAccountInfo
        pathType: Prefix
        backend:
          service:
            name: find-complementary-account-info
            port:
              number: 8083
      - path: /api/FindLimitedPackageBS
        pathType: Prefix
        backend:
          service:
            name: find-limited-package
            port:
              number: 8082
      - path: /api/SendSMSBS
        pathType: Prefix
        backend:
          service:
            name: send-sms
            port:
              number: 8084
      - path: /api/SubscribeLimitedPackageCS
        pathType: Prefix
        backend:
          service:
            name: subscribe-limited-package
            port:
              number: 8085
To consume services in the development environment we use the load-balancer IP set in controller-svc-dev and port 30000:
https://10.161.169.12:30000/api/FindLimitedPackageBS
https://10.161.169.12:30000/api/FindComplementaryAccountInfo
QA environment
For the quality environment I have the following configuration, very similar to that of develop, only with a different IP:
controller-deployment-qa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-tcold
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-qa
      app.kubernetes.io/instance: ingress-nginx-qa
      app.kubernetes.io/component: controller-qa
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-qa
        app.kubernetes.io/instance: ingress-nginx-qa
        app.kubernetes.io/component: controller-qa
    spec:
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: regcred
      containers:
      - name: controller
        image: 10.164.7.203:37003/tmve/ingress-nginx/controller:v1.1.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        - --default-ssl-certificate=develop/srvdevma1-ssl
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          runAsUser: 101
          allowPrivilegeEscalation: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: webhook
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        resources:
          requests:
            cpu: 1
            memory: 512Mi
      nodeSelector:
        kubernetes.io/hostname: tcolt022
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
controller-svc-qa.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-qa
  annotations:
    metallb.universe.tf/allow-shared-ip: shared-ip
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.161.173.45
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
    appProtocol: http
  - name: https
    port: 30000
    protocol: TCP
    targetPort: https
    appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx-qa
    app.kubernetes.io/instance: ingress-nginx-qa
    app.kubernetes.io/component: controller-qa
Ingress rules for QA:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-calidad
  namespace: calidad
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
  - secretName: srvdevma1-ssl
  rules:
  - http:
      paths:
      - path: /api/FindComplementaryAccountInfo
        pathType: Prefix
        backend:
          service:
            name: find-complementary-account-info
            port:
              number: 8083
      - path: /api/FindLimitedPackageBS
        pathType: Prefix
        backend:
          service:
            name: find-limited-package
            port:
              number: 8082
      - path: /api/SendSMSBS
        pathType: Prefix
        backend:
          service:
            name: send-sms
            port:
              number: 8084
      - path: /api/SubscribeLimitedPackageCS
        pathType: Prefix
        backend:
          service:
            name: subscribe-limited-package
            port:
              number: 8085
The services in this environment should then be reachable the same way; compared to development, only the IP changes:
https://10.161.173.45:30000/api/FindLimitedPackageBS
https://10.161.173.45:30000/api/FindComplementaryAccountInfo
Is there a way to do this with ingress-nginx, given that the same Ingress rules must be kept for the services, just in different namespaces?
Update
I managed to find a solution through the following very good documentation:
https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
You can achieve this use case by using Ingress classes. Ingresses can be implemented by different controllers, often with different configurations. Each Ingress should specify a class: a reference to an IngressClass resource that contains additional configuration, including the name of the controller that should implement the class.
You can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with ingressClassName. Ensure the --controller-class and --ingress-class flags are set to something different on each ingress controller.
First, specify --controller-class=k8s.io/internal-ingress-nginx and --ingress-class=k8s.io/internal-nginx in the ingress-nginx deployment. Then use the same controller-class value in the IngressClass, and refer to that IngressClass in your Ingress via ingressClassName. See Multiple Ingress Controllers for more information on how to set the Ingress class.
Note: if --controller-class is left at the default value of k8s.io/ingress-nginx, the controller will monitor Ingresses with no class annotation as well as Ingresses whose class annotation is set to nginx. Use a non-default value for --controller-class to ensure that the controller only satisfies its specific class of Ingresses.
You can also configure the Ingress controller to handle configuration resources only from a particular namespace, which is controlled through the --watch-namespace command-line argument. This can be useful if you want to use different NGINX Ingress controllers for different applications, both for isolation and for operational reasons.
--watch-namespace restricts which namespace is watched for Ingress resources. By default the Ingress controller watches all namespaces.
See Running Multiple Ingress Controllers for more information.
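For reference, a minimal sketch of the pieces the documentation describes; the class and controller names here are illustrative, not values from this cluster:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: internal-nginx
spec:
  controller: k8s.io/internal-ingress-nginx   # must match the controller's --controller-class flag
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-develop
  namespace: develop
spec:
  ingressClassName: internal-nginx   # selects the controller that owns the class above
  rules:
  - http:
      paths:
      - path: /api/FindComplementaryAccountInfo
        pathType: Prefix
        backend:
          service:
            name: find-complementary-account-info
            port:
              number: 8083

Running one controller per environment with distinct --controller-class values, plus --watch-namespace pointed at develop or calidad respectively, keeps identical rules in both namespaces without the two controllers competing for the same Ingresses.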

Helm 3, ingress-nginx-controller not updating after changing values in helm resource

I am using helm to configure my local dev env. My configuration worked pre-Kubernetes v1.22, but due to the recent Docker Desktop updates, I'd now like to support Kubernetes v1.22.
I have followed Kubernetes-specific migration examples, but I am encountering issues when attempting to upgrade my ingress-nginx-controller. I was using ingress-nginx v0.45, but based on their version support table here it looks like I need at least v1.0 in order to work with Kubernetes v1.22
Looking at my ingress-nginx-controller logs in Docker Desktop after running a helm install, the startup banner still reports v0.45.0.
My deployment resource looked like this:
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.27.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.45.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v0.45.0@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
        - --default-ssl-certificate=$(POD_NAMESPACE)/rs-ssl-secret
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/configmap-nginx
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          runAsUser: 101
          allowPrivilegeEscalation: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: webhook
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
I then updated the version numbers:
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.5
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
      - name: controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755
        imagePullPolicy: Always
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
        - --default-ssl-certificate=$(POD_NAMESPACE)/rs-ssl-secret
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-controller-leader
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/configmap-nginx
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
          runAsUser: 101
          allowPrivilegeEscalation: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: webhook
          containerPort: 8443
          protocol: TCP
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
After making the changes and running a helm uninstall followed by a helm install, I get the exact same logs, indicating that the nginx-ingress-controller is still running v0.45.0.
Is there something I'm missing? It almost feels like the changes I'm making to my deployment resource aren't making it into my install. Apologies if I'm missing something obvious; any help would be massively appreciated. I'm not the most experienced Helm/Kubernetes user.
Usually, you would change the version by specifying it in your installation command. For example:
helm install ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--version 4.0.15 \
--generate-name
--version string specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used
You can check out the docs on helm install. It may also be worth checking the official installation guide.
If you want to have the chart locally for some reason, download it from GitHub:
curl -fsSLO https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.0.15/ingress-nginx-4.0.15.tgz
tar -xf ingress-nginx-4.0.15.tgz
You can also add the repo locally.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx/ingress-nginx --version 4.0.15 ...
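Either way, it is worth verifying what actually got deployed; a quick check (the namespace placeholder is whatever .Values.namespace rendered to):

helm list -A
kubectl -n <namespace> get deployment ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}'

If the image still reports v0.45.0, note that the updated manifest above pins the image by digest (@sha256:...) and that digest is unchanged from the v0.45.0 manifest; when both a tag and a digest are present, the digest wins, so bumping only the tag does not change what gets pulled.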

How to make Traefik bind the host's ports 80 and 443 when using a Deployment

I am using Traefik 2.2.1 as my cluster's entrypoint, deployed as a Deployment. This is my deployment config:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik
  namespace: kube-system
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/traefik
  uid: ddee327d-8570-44be-ab8d-06cb440187f4
  resourceVersion: '335024'
  generation: 12
  creationTimestamp: '2020-06-04T07:37:20Z'
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-8.2.1
  annotations:
    deployment.kubernetes.io/revision: '7'
    meta.helm.sh/release-name: traefik
    meta.helm.sh/release-namespace: kube-system
spec:
  replicas: 4
  selector:
    matchLabels:
      app.kubernetes.io/instance: traefik
      app.kubernetes.io/name: traefik
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: traefik
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: traefik
        helm.sh/chart: traefik-8.2.1
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: traefik
        image: 'traefik:2.2.1'
        args:
        - '--global.checknewversion'
        - '--global.sendanonymoususage'
        - '--entryPoints.traefik.address=:9000'
        - '--entryPoints.web.address=:80'
        - '--entryPoints.websecure.address=:443'
        - '--api.dashboard=true'
        - '--ping=true'
        - '--providers.kubernetescrd'
        - '--providers.kubernetesingress'
        ports:
        - name: traefik
          containerPort: 9000
          protocol: TCP
        - name: web
          containerPort: 8000
          protocol: TCP
        - name: websecure
          containerPort: 8443
          protocol: TCP
        resources: {}
        volumeMounts:
        - name: data
          mountPath: /data
        livenessProbe:
          httpGet:
            path: /ping
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 2
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /ping
            port: 9000
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 2
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 1
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        imagePullPolicy: IfNotPresent
        securityContext:
          capabilities:
            drop:
            - ALL
          runAsUser: 65532
          runAsGroup: 65532
          runAsNonRoot: true
          readOnlyRootFilesystem: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      serviceAccountName: traefik
      serviceAccount: traefik
      securityContext:
        fsGroup: 65532
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 12
  replicas: 5
  updatedReplicas: 2
  readyReplicas: 3
  availableReplicas: 3
  unavailableReplicas: 2
  conditions:
  - type: Available
    status: 'True'
    lastUpdateTime: '2020-06-04T08:41:03Z'
    lastTransitionTime: '2020-06-04T08:41:03Z'
    reason: MinimumReplicasAvailable
    message: Deployment has minimum availability.
  - type: Progressing
    status: 'True'
    lastUpdateTime: '2020-06-04T10:57:35Z'
    lastTransitionTime: '2020-06-04T10:48:40Z'
    reason: ReplicaSetUpdated
    message: ReplicaSet "traefik-dd74b59b" is progressing.
My question: is it possible to make Traefik listen on the host's ports 80 and 443? If so, how? Or should I change my Deployment to a DaemonSet? If neither is possible, I would have to deploy an nginx on each node to forward traffic.
Add hostNetwork: true in the pod spec. This makes the pod use the host's network namespace.
...
spec:
  hostNetwork: true
  containers:
  - name: traefik
...
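One caveat worth adding: a pod on the host network does not use cluster DNS by default, so if Traefik needs to resolve cluster-internal names, also set the DNS policy. A minimal sketch:

spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS resolution while on the host network
  containers:
  - name: traefik

Note also that the posted securityContext runs as UID 65532 with all capabilities dropped; binding host ports below 1024 then requires adding NET_BIND_SERVICE (compare the nginx DaemonSet at the top of this page). A DaemonSet with hostNetwork is the usual shape for this, since each node then exposes 80 and 443 exactly once.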

Accessing a Kubernetes headless service through Ambassador

I have deployed my service as a headless service, following the Kubernetes configuration described in this link (http://vertx.io/docs/vertx-hazelcast/java/#_using_this_cluster_manager). My service is load balanced and proxied using Ambassador. Everything was working fine as long as the service was not headless. Once the service became headless, Ambassador could no longer discover it: it was looking for a clusterIP, which is missing now that the service is headless. What do I need to include in my deployment.yaml so these services are discovered by Ambassador?
The error I see is "upstream connect error or disconnect/reset before headers. reset reason: connection failure".
I need these services to be headless because that is the only way to form a cluster using Hazelcast, and I am creating a WebSocket connection and a Vert.x event bus.
apiVersion: v1
kind: Service
metadata:
  name: abt-login-service
  labels:
    chart: "abt-login-service-0.1.0-SNAPSHOT"
  annotations:
    fabric8.io/expose: "true"
    fabric8.io/ingress.annotations: 'kubernetes.io/ingress.class: nginx'
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      name: login_mapping
      ambassador_id: default
      kind: Mapping
      prefix: /login/
      service: abt-login-service.default.svc.cluster.local
      use_websocket: true
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  - name: hz-port-name
    port: 5701
    protocol: TCP
Deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: RELEASE-NAME-abt-login-service
  labels:
    draft: draft-app
    chart: "abt-login-service-0.1.0-SNAPSHOT"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: RELEASE-NAME-abt-login-service
  minReadySeconds: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        draft: draft-app
        app: RELEASE-NAME-abt-login-service
        component: abt-login-service
    spec:
      serviceAccountName: vault-auth
      containers:
      - name: abt-login-service
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "dev"
        - name: _JAVA_OPTIONS
          value: "-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2 -Dsun.zip.disableMemoryMapping=true -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90 -Dhazelcast.diagnostics.enabled=true"
        image: "draft:dev"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
        - containerPort: 5701
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 500m
            memory: 1024Mi
          requests:
            cpu: 400m
            memory: 512Mi
      terminationGracePeriodSeconds: 10
How can I make these services discoverable by Ambassador?
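One pattern worth sketching (an assumption, not something from this thread): keep the headless Service for Hazelcast member discovery and add a second, plain ClusterIP Service over the same pods for Ambassador to route to. The name abt-login-service-http is hypothetical:

apiVersion: v1
kind: Service
metadata:
  name: abt-login-service-http
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: login_mapping
      ambassador_id: default
      prefix: /login/
      service: abt-login-service-http.default.svc.cluster.local
      use_websocket: true
spec:
  type: ClusterIP   # gives Ambassador a stable virtual IP to target
  selector:
    app: RELEASE-NAME-abt-login-service
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP

The headless abt-login-service keeps serving the per-pod DNS records Hazelcast needs, while Ambassador routes through the ClusterIP as before.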