Can't read ingress-nginx controller pod's logs after configuring filebeat - kubernetes

We are currently working on persisting the ingress-nginx logs. We implemented a filebeat sidecar to push the logs to logstash, but we cannot read access.log and error.log at this path: /var/log/nginx/access.log. We now need to read the logs from this path; please suggest a solution for this issue.
The ingress-nginx manifests are deployed like this:
filebeat configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-configmap
  namespace: ingress-nginx
data:
  filebeat.yml: |
    filebeat:
      config:
        modules:
          path: /usr/share/filebeat/modules.d/*.yml
          reload:
            enabled: true
      modules:
        - module: nginx
          access:
            var.paths: ["/var/log/nginx/access.log*"]
          error:
            var.paths: ["/var/log/nginx/error.log*"]
    output:
      logstash:
        hosts: ["logstash-logstash-headless:9600"]
        loadbalance: true
deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.10.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.41.2
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v0.41.2@sha256:1f4f402b9c14f3ae92b11ada1dfe9893a88f0faeb0b2f4b903e2c67a0c3bf0de
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --enable-ssl-passthrough
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
            - name: prometheus
              containerPort: 10254
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
            - name: nginx-logs
              mountPath: var/log/nginx
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
        - name: filebeat-nginx
          image: docker.elastic.co/beats/filebeat:7.13.0
          volumeMounts:
            - name: nginx-logs
              mountPath: var/log/nginx
            - name: filebeat-config
              mountPath: /usr/share/filebeat/filebeat.yml
              subPath: filebeat.yml
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: nginx-logs
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
        - name: filebeat-config
          configMap:
            name: filebeat-configmap
            items:
              - key: filebeat.yml
                path: filebeat.yml

It looks like there is a typo in the file path in the following code:
            - name: nginx-logs
              mountPath: var/log/nginx
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
        - name: filebeat-nginx
          image: docker.elastic.co/beats/filebeat:7.13.0
          volumeMounts:
            - name: nginx-logs
              mountPath: var/log/nginx
The file paths need to be changed from var/log/nginx to /var/log/nginx (with a forward slash at the beginning of the path), in both the controller and filebeat-nginx containers.
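For reference, a corrected sketch of the two mounts (same volume and container names as above; only the leading slash is added):

          # controller container
          volumeMounts:
            - name: nginx-logs
              mountPath: /var/log/nginx
          # filebeat-nginx sidecar
          volumeMounts:
            - name: nginx-logs
              mountPath: /var/log/nginx

After redeploying, you can check that the sidecar actually sees the log files with something like the following (the pod name is a placeholder):

kubectl exec -n ingress-nginx <controller-pod> -c filebeat-nginx -- ls -l /var/log/nginx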

Related

Configure Ingress-Nginix in Cluster Kubernetes in 2 Namespace

Good afternoon
I am working with ingress-nginx for service exposure in an on-premise Kubernetes cluster. In this cluster we manage two environments: Development (DEV) and Quality (QA).
What we want is to have one ingress-nginx controller for each environment (DEV and QA), but so far I have not been able to configure this. I am applying the configuration below, but I cannot get the controller behind each IP to route requests according to its environment. For example:
DEV environment
controller-deployment-dev.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-dev
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-dev
      app.kubernetes.io/instance: ingress-nginx-dev
      app.kubernetes.io/component: controller-dev
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-dev
        app.kubernetes.io/instance: ingress-nginx-dev
        app.kubernetes.io/component: controller-dev
    spec:
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      containers:
        - name: controller
          image: 10.164.7.203:37003/tmve/ingress-nginx/controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --default-ssl-certificate=develop/srvdevma1-ssl
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 1
              memory: 512Mi
      nodeSelector:
        kubernetes.io/hostname: tcold016
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
controller-svc-dev.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-dev
  annotations:
    metallb.universe.tf/allow-shared-ip: shared-ip
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.161.169.12
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 30000
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx-dev
    app.kubernetes.io/instance: ingress-nginx-dev
    app.kubernetes.io/component: controller-dev
rules ingress dev
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-develop
  namespace: develop
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - secretName: srvdevma1-ssl
  rules:
    - http:
        paths:
          - path: /api/FindComplementaryAccountInfo
            pathType: Prefix
            backend:
              service:
                name: find-complementary-account-info
                port:
                  number: 8083
          - path: /api/FindLimitedPackageBS
            pathType: Prefix
            backend:
              service:
                name: find-limited-package
                port:
                  number: 8082
          - path: /api/SendSMSBS
            pathType: Prefix
            backend:
              service:
                name: send-sms
                port:
                  number: 8084
          - path: /api/SubscribeLimitedPackageCS
            pathType: Prefix
            backend:
              service:
                name: subscribe-limited-package
                port:
                  number: 8085
To consume services in the development environment we use the IP indicated in controller-svc-dev and port 30000:
https://10.161.169.12:30000/api/FindLimitedPackageBS
https://10.161.169.12:30000/api/FindComplementaryAccountInfo
QA environment
For the quality environment I have the following configuration, very similar to that of develop, only with a different IP:
controller-deployment-qa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-tcold
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-qa
      app.kubernetes.io/instance: ingress-nginx-qa
      app.kubernetes.io/component: controller-qa
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-qa
        app.kubernetes.io/instance: ingress-nginx-qa
        app.kubernetes.io/component: controller-qa
    spec:
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      containers:
        - name: controller
          image: 10.164.7.203:37003/tmve/ingress-nginx/controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --default-ssl-certificate=develop/srvdevma1-ssl
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 1
              memory: 512Mi
      nodeSelector:
        kubernetes.io/hostname: tcolt022
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
controller-svc-qa.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-qa
  annotations:
    metallb.universe.tf/allow-shared-ip: shared-ip
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.161.173.45
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 30000
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx-qa
    app.kubernetes.io/instance: ingress-nginx-qa
    app.kubernetes.io/component: controller-qa
rules ingress qa
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-calidad
  namespace: calidad
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - secretName: srvdevma1-ssl
  rules:
    - http:
        paths:
          - path: /api/FindComplementaryAccountInfo
            pathType: Prefix
            backend:
              service:
                name: find-complementary-account-info
                port:
                  number: 8083
          - path: /api/FindLimitedPackageBS
            pathType: Prefix
            backend:
              service:
                name: find-limited-package
                port:
                  number: 8082
          - path: /api/SendSMSBS
            pathType: Prefix
            backend:
              service:
                name: send-sms
                port:
                  number: 8084
          - path: /api/SubscribeLimitedPackageCS
            pathType: Prefix
            backend:
              service:
                name: subscribe-limited-package
                port:
                  number: 8085
The services in this environment should then be reachable the same way; compared to development, only the IP changes:
https://10.161.173.45:30000/api/FindLimitedPackageBS
https://10.161.173.45:30000/api/FindComplementaryAccountInfo
Is there any way to achieve this with ingress-nginx, with the constraint that the same ingress rules must be kept for the services, just in different namespaces?
Update
I managed to find a solution through the following very good documentation:
https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
You can achieve this use case by using Ingress Classes. Ingresses can be implemented by different controllers, often with different configurations. Each Ingress should specify a class: a reference to an IngressClass resource that contains additional configuration, including the name of the controller that should implement the class.
You can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with ingressClassName. Ensure that --controller-class= and --ingress-class are set to something different on each ingress controller.
Firstly, specify '--controller-class=k8s.io/internal-ingress-nginx' and '--ingress-class=k8s.io/internal-nginx' in the ingress-nginx deployment. Then use the same value of controller-class in the IngressClass, and refer to that IngressClass in your Ingress using ingressClassName. Refer to Multiple Ingress Controllers for more information on how to set the Ingress class.
Note: if --controller-class is set to the default value of k8s.io/ingress-nginx, the controller will monitor Ingresses with no class annotation and Ingresses with the annotation class set to nginx. Use a non-default value for --controller-class to ensure that the controller only satisfies the specific class of Ingresses.
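For illustration, a minimal pair of resources following that pattern might look like this (the class and controller names here are examples from the docs, not values from the manifests above):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: internal-nginx
spec:
  # must match the --controller-class passed to the controller
  controller: k8s.io/internal-ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: develop
spec:
  # selects the controller via the IngressClass above
  ingressClassName: internal-nginx
  rules:
    - http:
        paths:
          - path: /api/example
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 8080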
You can also configure the Ingress Controller to handle configuration resources only from a particular namespace, which is controlled through the --watch-namespace command-line argument. This can be useful if you want to use different NGINX Ingress Controllers for different applications, both in terms of isolation and/or operation.
--watch-namespace restricts the controller to watching a single namespace for Ingress resources. By default the Ingress Controller watches all namespaces.
Refer to Running Multiple Ingress Controllers for more information.
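Applied to the DEV/QA setup above, the two controllers' args could then differ along these lines (a sketch; the class names are illustrative):

          # DEV controller args
          args:
            - /nginx-ingress-controller
            - --controller-class=k8s.io/ingress-nginx-dev
            - --ingress-class=nginx-dev
            - --watch-namespace=develop

          # QA controller args
          args:
            - /nginx-ingress-controller
            - --controller-class=k8s.io/ingress-nginx-qa
            - --ingress-class=nginx-qa
            - --watch-namespace=calidad

Each environment's Ingress then sets ingressClassName to the matching class, so the DEV controller never picks up QA rules and vice versa.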

Helm 3, ingress-nginx-controller not updating after changing values in helm resource

I am using helm to configure my local dev env. My configuration worked pre-Kubernetes v1.22, but due to the recent Docker Desktop updates, I'd now like to support Kubernetes v1.22.
I have followed Kubernetes-specific migration examples, but I am encountering issues when attempting to upgrade my ingress-nginx-controller. I was using ingress-nginx v0.45, but based on their version support table here it looks like I need at least v1.0 in order to work with Kubernetes v1.22.
Looking at my ingress-nginx-controller logs in Docker Desktop I can see the following after running a helm install:
My deployment resource looked like this:
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-3.27.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.45.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v0.45.0@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
            - --default-ssl-certificate=$(POD_NAMESPACE)/rs-ssl-secret
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/configmap-nginx
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
I then updated the version numbers:
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.5
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: {{ .Values.namespace }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:c4390c53f348c3bd4e60a5dd6a11c35799ae78c49388090140b9d72ccede1755
          imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/nginx-default-backend
            - --default-ssl-certificate=$(POD_NAMESPACE)/rs-ssl-secret
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/configmap-nginx
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
After making the changes and running a helm uninstall followed by a helm install, I get the exact same logs, indicating that the nginx-ingress-controller is still using v0.45.0.
Is there something I'm missing? It almost feels like the changes I'm making to my deployment resource aren't appearing in my install. Apologies if I'm missing something obvious; any help would be massively appreciated. I'm not the most experienced Helm/Kubernetes user.
Usually, you would change the version by specifying this in your installation command.
For example
helm install ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --version 4.0.15 \
  --generate-name
--version string: specify a version constraint for the chart version to use. This constraint can be a specific tag (e.g. 1.1.1) or it may reference a valid range (e.g. ^2.0.0). If this is not specified, the latest version is used.
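Since the constraint accepts a range as well as an exact tag, a hedged variant of the command above would be:

helm install ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --version "^4.0.0" \
  --generate-name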
You can check out the docs on helm install. Maybe it's also worth checking the official installation guide.
If you want to have that chart locally for some reason, download it from git.
curl -fsSLO https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.0.15/ingress-nginx-4.0.15.tgz
tar -xf ingress-nginx-4.0.15.tgz
You can also add the repo locally.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx/ingress-nginx --version 4.0.15 ...
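Since a release already exists in your case, a helm upgrade avoids the uninstall/install cycle entirely; a sketch, assuming the release is named ingress-nginx and lives in the ingress-nginx namespace:

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --version 4.0.15 \
  --namespace ingress-nginx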

Zonal network endpoint group unhealthy even though that container application working properly

I've created a Kubernetes cluster on Google Cloud and, even though the application is running properly (which I've checked by running requests inside the cluster), it seems that the NEG health check is not working properly. Any ideas on the cause?
I've tried changing the service from NodePort to LoadBalancer and different ways of adding annotations to the service. I was thinking that perhaps it might be related to the HTTPS requirement on the Django side.
# [START kubernetes_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moner-app
  labels:
    app: moner-app
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: moner-app
  template:
    metadata:
      labels:
        app: moner-app
    spec:
      containers:
        - name: moner-core-container
          image: my-template
          imagePullPolicy: Always
          resources:
            requests:
              memory: "128Mi"
            limits:
              memory: "512Mi"
          startupProbe:
            httpGet:
              path: /ht/
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
            failureThreshold: 30
            timeoutSeconds: 10
            periodSeconds: 10
            initialDelaySeconds: 90
          readinessProbe:
            initialDelaySeconds: 120
            httpGet:
              path: "/ht/"
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
            periodSeconds: 10
            failureThreshold: 3
            timeoutSeconds: 10
          livenessProbe:
            initialDelaySeconds: 30
            failureThreshold: 3
            periodSeconds: 30
            timeoutSeconds: 10
            httpGet:
              path: "/ht/"
              port: 5000
              httpHeaders:
                - name: "X-Forwarded-Proto"
                  value: "https"
          volumeMounts:
            - name: cloudstorage-credentials
              mountPath: /secrets/cloudstorage
              readOnly: true
          env:
            # [START_secrets]
            - name: THIS_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: GRACEFUL_TIMEOUT
              value: '120'
            - name: GUNICORN_HARD_TIMEOUT
              value: '90'
            - name: DJANGO_ALLOWED_HOSTS
              value: '*,$(THIS_POD_IP),0.0.0.0'
          ports:
            - containerPort: 5000
          args: ["/start"]
        # [START proxy_container]
        - image: gcr.io/cloudsql-docker/gce-proxy:1.16
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=moner-dev:us-east1:core-db=tcp:5432",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          resources:
            requests:
              memory: "64Mi"
            limits:
              memory: "128Mi"
          volumeMounts:
            - name: cloudsql-oauth-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
            - name: cloudsql
              mountPath: /cloudsql
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
        - name: cloudsql
          emptyDir: {}
        - name: cloudstorage-credentials
          secret:
            secretName: cloudstorage-credentials
      # [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: moner-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
  labels:
    app: moner-svc
spec:
  type: NodePort
  ports:
    - name: moner-core-http
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: moner-app
# [END service]
---
# [START certificates_setup]
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
    - domain.com
    - app.domain.com
# [END certificates_setup]
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: moner-backendconfig
spec:
  customRequestHeaders:
    headers:
      - "X-Forwarded-Proto:https"
  healthCheck:
    checkIntervalSec: 15
    port: 5000
    type: HTTP
    requestPath: /ht/
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: moner-ssl
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: moner-svc
      port:
        name: moner-core-http
Apparently, you didn't have a GCP firewall rule to allow traffic on port 5000 to your GKE nodes. Creating an ingress firewall rule with IP range 0.0.0.0/0 and port TCP 5000, targeted at your GKE nodes, could allow your setup to work even with port 5000.
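A hedged example of creating such a rule with gcloud (the rule name and node network tag are placeholders; in practice you may want to restrict the source ranges to Google's health-check ranges rather than 0.0.0.0/0):

gcloud compute firewall-rules create allow-tcp-5000 \
  --network=default \
  --direction=INGRESS \
  --allow=tcp:5000 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=<gke-node-tag>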
I'm still not sure why, but I managed to get it working by moving the service to port 80 and keeping the health check on 5000.
Service config:
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: moner-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
  labels:
    app: moner-svc
spec:
  type: NodePort
  ports:
    - name: moner-core-http
      port: 80
      protocol: TCP
      targetPort: 5000
  selector:
    app: moner-app
# [END service]
Backend config:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: moner-backendconfig
spec:
  customRequestHeaders:
    headers:
      - "X-Forwarded-Proto:https"
  healthCheck:
    checkIntervalSec: 15
    port: 5000
    type: HTTP
    requestPath: /ht/
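One way to confirm whether the NEG backends actually report healthy after a change like this (a hedged sketch; the backend service name is generated per cluster, so list first and substitute):

gcloud compute network-endpoint-groups list
gcloud compute backend-services list
gcloud compute backend-services get-health <backend-service-name> --global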

Kubernetes Dashboard Ingress returning empty response from server

I am trying to set up the Kubernetes dashboard. I have enabled the custom SSL certs from my domain and can curl the pod directly with no issues; I can also curl the service and it works with no issues. However, when I try to access it via ingress I get (52) empty response from server. I have an NLB forwarding to the port of the nginx controller service (ingress works fine with another app). Here is my ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  labels:
    app: dashboard
    name: dashboard-ingress
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
    - host: k8sdash.domain.com
      http:
        paths:
          - backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
            path: /
Here is the DaemonSet config for my ingress controllers.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "3"
  creationTimestamp: "2020-05-19T15:48:13Z"
  generation: 3
  labels:
    app: lb
    app.kubernetes.io/component: controller
    chart: nginx-ingress-1.36.3
    heritage: Tiller
    release: lb
  name: lb-controller
  namespace: kube-system
  resourceVersion: "747622"
  selfLink: /apis/apps/v1/namespaces/kube-system/daemonsets/lb-controller
  uid: 19d830ba-f2d9-4c6f-bc8d-d64667a900c7
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: lb
      release: lb
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: lb
        app.kubernetes.io/component: controller
        component: controller
        release: lb
    spec:
      containers:
        - args:
            - /nginx-ingress-controller
            - --default-backend-service=kube-system/lb-default-backend
            - --publish-service=kube-system/lb-controller
            - --election-id=ingress-controller-leader
            - --ingress-class=nginx
            - --configmap=kube-system/lb-controller
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: lb-controller
          ports:
            - containerPort: 80
              hostPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              hostPort: 443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources: {}
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            runAsUser: 101
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: lb
      serviceAccountName: lb
      terminationGracePeriodSeconds: 60
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 3
  desiredNumberScheduled: 3
  numberAvailable: 3
  numberMisscheduled: 0
  numberReady: 3
  observedGeneration: 3

Kubernetes cannot access Cassandra within same namespace

I get
All host(s) tried for query failed (tried: 10.244.0.72/10.244.0.72:9042 (com.datastax.driver.core.exceptions.TransportException: [10.244.0.72/10.244.0.72:9042] Channel has been closed))
when trying to access Cassandra within the same namespace. When I forward ports, however, it works fine from localhost, and the keyspace is created successfully.
kubectl port-forward cassandra1-0 9042:9042
My YAML:
apiVersion: v1
kind: Service
metadata:
  name: cassandra1
  labels:
    app: cassandra1
spec:
  ports:
    - name: "cql"
      protocol: "TCP"
      port: 9042
      targetPort: 9042
    - name: "thrift"
      protocol: "TCP"
      port: 9160
      targetPort: 9160
  selector:
    app: cassandra1
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra1
  labels:
    app: cassandra1
spec:
  serviceName: cassandra1
  replicas: 1
  selector:
    matchLabels:
      app: cassandra1
  template:
    metadata:
      labels:
        app: cassandra1
    spec:
      terminationGracePeriodSeconds: 1800
      containers:
        - name: cassandra1
          image: gcr.io/google-samples/cassandra:v13
          imagePullPolicy: Always
          ports:
            - containerPort: 7000
              name: intra-node
            - containerPort: 7001
              name: tls-intra-node
            - containerPort: 7199
              name: jmx
            - containerPort: 9042
              name: cql
            - containerPort: 9160
              name: thrift
          resources:
            limits:
              cpu: "500m"
              memory: 1Gi
            requests:
              cpu: "500m"
              memory: 1Gi
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - nodetool drain
          env:
            - name: MAX_HEAP_SIZE
              value: 512M
            - name: HEAP_NEWSIZE
              value: 100M
            - name: CASSANDRA_SEEDS
              value: "cassandra1-0.cassandra1.default.svc.cluster.local"
            - name: CASSANDRA_CLUSTER_NAME
              value: "cassandra1"
            - name: CASSANDRA_DC
              value: "DC1-cassandra1"
            - name: CASSANDRA_RACK
              value: "Rack1-cassandra1"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          readinessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - /ready-probe.sh
            initialDelaySeconds: 15
            timeoutSeconds: 5
          # These volume mounts are persistent. They are like inline claims,
          # but not exactly because the names need to match exactly one of
          # the stateful pod volumes.
          volumeMounts:
            - name: cassandra1-data
              mountPath: /cassandra1_data
  volumeClaimTemplates:
    - metadata:
        name: cassandra1-data
        namespace: default
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
Cassandra starts with the following properties:
Starting Cassandra on 10.244.0.72
CASSANDRA_CONF_DIR /etc/cassandra
CASSANDRA_CFG /etc/cassandra/cassandra.yaml
CASSANDRA_AUTO_BOOTSTRAP true
CASSANDRA_BROADCAST_ADDRESS 10.244.0.72
CASSANDRA_BROADCAST_RPC_ADDRESS 10.244.0.72
CASSANDRA_CLUSTER_NAME cassandra1
CASSANDRA_COMPACTION_THROUGHPUT_MB_PER_SEC
CASSANDRA_CONCURRENT_COMPACTORS
CASSANDRA_CONCURRENT_READS
CASSANDRA_CONCURRENT_WRITES
CASSANDRA_COUNTER_CACHE_SIZE_IN_MB
CASSANDRA_DC DC1-cassandra1
CASSANDRA_DISK_OPTIMIZATION_STRATEGY ssd
CASSANDRA_ENDPOINT_SNITCH SimpleSnitch
CASSANDRA_GC_WARN_THRESHOLD_IN_MS
CASSANDRA_INTERNODE_COMPRESSION
CASSANDRA_KEY_CACHE_SIZE_IN_MB
CASSANDRA_LISTEN_ADDRESS 10.244.0.72
CASSANDRA_LISTEN_INTERFACE
CASSANDRA_MEMTABLE_ALLOCATION_TYPE
CASSANDRA_MEMTABLE_CLEANUP_THRESHOLD
CASSANDRA_MEMTABLE_FLUSH_WRITERS
CASSANDRA_MIGRATION_WAIT 1
CASSANDRA_NUM_TOKENS 32
CASSANDRA_RACK Rack1-cassandra1
CASSANDRA_RING_DELAY 30000
CASSANDRA_RPC_ADDRESS 0.0.0.0
CASSANDRA_RPC_INTERFACE
CASSANDRA_SEEDS cassandra1-0.cassandra1.default.svc.cluster.local
CASSANDRA_SEED_PROVIDER org.apache.cassandra.locator.SimpleSeedProvider
changed ownership of '/cassandra_data/data' from root to cassandra
changed ownership of '/cassandra_data' from root to cassandra
In my application, which runs in the same namespace, I tried setting the Cassandra port to 9042 and the host to:
10.240.0.4 (hostIP)
10.244.0.72 (podIP)
cassandra1 (name of the service)
cassandra1.default
cassandra1.default.svc.cluster.local
cassandra1-0.cassandra1.default.svc.cluster.local
_cql._tcp.cassandra1.default.svc.cluster.local
I also tried different types of service:
headless, ClusterIP, NodePort
Does anybody have ANY ideas what is wrong, or what else I can try to get this to work?