How to create only an internal load balancer with the ingress-nginx chart? - kubernetes-helm

In my AWS EKS cluster, I have installed ingress-nginx with the following command:
helm upgrade --install -f controller.yaml \
--namespace nginx-ingress \
--create-namespace \
--version 3.26.0 \
nginx-ingress ingress-nginx/ingress-nginx
where the controller.yaml file looks like this:
controller:
  ingressClass: nginx-internal
  service:
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
I have a few applications, each with its own Ingress using a different virtual host, and I want all Ingress objects to point to the internal load balancer.
Even if I set the ingress class in the applications' Ingresses, they still seem to point to the public load balancer:
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx-internal
So, is there a way to create only a single internal load balancer, with all Ingresses pointing to that load balancer?
Thanks

I managed to get this working by using the following controller.yaml
controller:
  ingressClassByName: true
  ingressClassResource:
    name: nginx-internal
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx-internal"
  service:
    # Disable the external LB
    external:
      enabled: false
    # Enable the internal LB. The annotations are important here, without
    # these you will get a "classic" loadbalancer
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
Then you can use ingressClassName as follows:
kind: Ingress
spec:
  ingressClassName: nginx-internal
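For reference, here is a minimal sketch of a complete Ingress using that class. The host, service name, and port are hypothetical placeholders for one of your applications:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                           # hypothetical name
spec:
  ingressClassName: nginx-internal
  rules:
    - host: app.internal.example.com     # hypothetical internal host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-service     # hypothetical existing Service
                port:
                  number: 8080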
It's not necessary, but I deployed this to a namespace whose name reflects the internal-only ingress:
helm upgrade --install \
--create-namespace ingress-nginx-internal ingress-nginx/ingress-nginx \
--namespace ingress-nginx-internal \
-f controller.yaml

I noticed in your controller.yaml that you enabled the internal setup. According to the documentation, this setup creates two load balancers, an external and an internal one, for the case where you want to expose some applications to the internet and others only inside your VPC in the same Kubernetes cluster.
If you want just one internal load balancer, try setting up your controller.yaml like this:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxxxx,subnet-yyyyy,subnet-zzzzz"
This will provision just one NLB that routes traffic internally.
Using the service.beta.kubernetes.io/aws-load-balancer-subnets annotation, you can choose which Availability Zones / subnets your load balancer will route traffic to.
If you remove the service.beta.kubernetes.io/aws-load-balancer-type annotation, a Classic Load Balancer will be provisioned instead of a Network Load Balancer.
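To double-check that the provisioned load balancer is really internal, you can look at the hostname assigned to the controller Service; internal AWS load balancers get DNS names prefixed with internal-. A minimal sketch, assuming the release lives in the nginx-ingress namespace as in the question:
kubectl -n nginx-ingress get svc
# the EXTERNAL-IP column should show something like
# internal-xxxxxxxxxxxx.<region>.elb.amazonaws.com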

Based on @rmakoto's answer, it seems some configs are missing in order to tell AWS to create an internal NLB. I tried the following configs, and now it works as expected:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-name: "k8s-nlb"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
      service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxx,subnet-yyy,subnet-zzz"
Now, to deploy, run the following command:
helm upgrade --install \
--create-namespace ingress-nginx nginx-stable/nginx-ingress \
--namespace ingress-nginx \
-f controller.yaml
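To confirm the scheme on the AWS side, you can describe the load balancer. A minimal sketch, assuming the AWS CLI is configured and the aws-load-balancer-name annotation above ("k8s-nlb") took effect:
aws elbv2 describe-load-balancers --names k8s-nlb \
  --query 'LoadBalancers[0].Scheme' --output text
# expected output: internal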

If you only want classic ELBs, this worked for me:
controller:
  service:
    external:
      enabled: false
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Related

Kubernetes Nginx ingress appears to remove headers before sending requests to backend service

I am trying to deploy a SpringBoot Java application, hosted on Apache Tomcat, to a Kubernetes cluster using Nginx Ingress for URL routing. More specifically, I am deploying on Minikube on my local machine, exposing the SpringBoot application as a ClusterIP Service, and executing the
minikube tunnel
command to expose services to my local machine. Visually, the process is the following...
Browser -> Ingress -> Service -> Pod hosting docker container of Apache Tomcat Server housing SpringBoot Java API
The backend service requires a header "SM_USER", which for the moment can be any value. When running the backend application as a Docker container with port forwarding, accessing the backend API works great. However, when deploying to a Kubernetes cluster behind an Nginx ingress, I get 403 errors from the API stating that the SM_USER header is missing. My suspicion is that the header is included with the request to the ingress, but removed when the request is routed to the backend service.
My setup is the following.
Deploying on Minikube
minikube start
minikube addons enable ingress
eval $(minikube docker-env)
docker build . -t api -f Extras/DockerfileAPI
docker build . -t ui -f Extras/DockerfileUI
kubectl apply -f Extras/deployments-api.yml
kubectl apply -f Extras/deployments-ui.yml
kubectl expose deployment ui --type=ClusterIP --port=8080
kubectl expose deployment api --type=ClusterIP --port=8080
kubectl apply -f Extras/ingress.yml
kubectl apply -f Extras/ingress-config.yml
edit /etc/hosts file to resolve mydomain.com to localhost
minikube tunnel
Ingress YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Tried forcing the header by manual addition, no luck (tried with and without)
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header SM_USER $http_Admin;
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - path: /ui
            pathType: Prefix
            backend:
              service:
                name: ui
                port:
                  number: 8080
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
Ingress Config YAML
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: default
  labels:
    app.kubernetes.io/name: my-ingress
    app.kubernetes.io/part-of: my-ingress
data:
  # Tried allowing underscores and also using proxy protocol (tried with and without)
  enable-underscores-in-headers: "true"
  use-proxy-protocol: "true"
When navigating to mydomain.com/api, I expect to receive the root API interface but instead receive the 403 error page indicating that SM_USER is missing. Note, this is not a 403 Forbidden error regarding the ingress or access from outside the cluster; the specific SpringBoot error page I receive is from within my application and is custom-built to indicate that the header is missing. In other words, my routing is definitely correct and I am able to reach the API; it's just that the header is missing.
Are there configs or parameters I am missing? Possibly an annotation?
This is resolved. The issue was that the ConfigMap was being applied in the application namespace. Note that even if the Ingress object is in the application namespace, if you are using minikube's built-in ingress functionality, the ConfigMap must be applied in the Nginx ingress namespace.
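A minimal sketch of that fix, assuming a recent minikube where the ingress addon runs in the ingress-nginx namespace and the controller reads a ConfigMap named ingress-nginx-controller (verify the exact namespace and the controller's --configmap argument on your version):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name; must match the controller's --configmap flag
  namespace: ingress-nginx         # assumed namespace of the minikube ingress addon
data:
  enable-underscores-in-headers: "true"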

Nginx-Ingress Helm Deployment --tcp-services-configmap Argument not found

I'm trying to do TCP/UDP port-forwarding with an ingress.
Following the docs: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/
It says to set --tcp-services-configmap but doesn't tell you where to set it. I assume it is a command-line argument. I then googled the list of command-line arguments for nginx-ingress:
https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/
Here you can clearly see it's an argument of the controller:
--tcp-services-configmap Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic.
First question: how do I dynamically add to the container arguments of the nginx-ingress Helm chart? I don't see that documented anywhere.
Second question: what is the proper way to set this with the current version of nginx-ingress, given that setting the command-line argument makes the container fail at startup because the binary doesn't have that argument option?
In the default Helm chart values.yaml there are some options for setting the namespace of the ConfigMap for tcp-services, but given that the docs say I have to set it as an argument, and that argument fails the startup, I'm not sure how you actually set this.
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml
I manually edited the deployment and set the flag on the container args:
- args:
  - -nginx-plus=false
  - -nginx-reload-timeout=60000
  - -enable-app-protect=false
  - -nginx-configmaps=$(POD_NAMESPACE)/emoney-nginx-controller-nginx-ingress
  - -default-server-tls-secret=$(POD_NAMESPACE)/emoney-nginx-controller-nginx-ingress-default-server-tls
  - -ingress-class=emoney-ingress
  - -health-status=false
  - -health-status-uri=/nginx-health
  - -tcp-services-configmap=emoney-node/tcp-services-configmap
  - -nginx-debug=false
  - -v=1
  - -nginx-status=true
  - -nginx-status-port=8080
  - -nginx-status-allow-cidrs=127.0.0.1
  - -report-ingress-status
  - -external-service=emoney-nginx-controller-nginx-ingress
  - -enable-leader-election=true
  - -leader-election-lock-name=emoney-nginx-controller-nginx-ingress-leader-election
  - -enable-prometheus-metrics=true
  - -prometheus-metrics-listen-port=9113
  - -prometheus-tls-secret=
  - -enable-custom-resources=true
  - -enable-tls-passthrough=false
  - -enable-snippets=false
  - -enable-preview-policies=false
  - -ready-status=true
  - -ready-status-port=8081
  - -enable-latency-metrics=false
  env:
When I set this, as the docs say should be possible, the pod fails to start up; it errors out saying that argument isn't an option of the binary.
kubectl logs emoney-nginx-controller-nginx-ingress-5769565cc7-vmgrf -n emoney-node
flag provided but not defined: -tcp-services-configmap
Usage of /nginx-ingress:
-alsologtostderr
log to standard error as well as files
-default-server-tls-secret string
A Secret with a TLS certificate and key for TLS termination of the default server. Format: <namespace>/<name>.
If not set, than the certificate and key in the file "/etc/nginx/secrets/default" are used.
If "/etc/nginx/secrets/default" doesn't exist, the Ingress Controller will configure NGINX to reject TLS connections to the default server.
If a secret is set, but the Ingress controller is not able to fetch it from Kubernetes API or it is not set and the Ingress Controller
fails to read the file "/etc/nginx/secrets/default", the Ingress controller will fail to start.
-enable-app-protect
Enable support for NGINX App Protect. Requires -nginx-plus.
-enable-custom-resources
Enable custom resources (default true)
-enable-internal-routes
Enable support for internal routes with NGINX Service Mesh. Requires -spire-agent-address and -nginx-plus. Is for use with NGINX Service Mesh only.
-enable-latency-metrics
Enable collection of latency metrics for upstreams. Requires -enable-prometheus-metrics
-enable-leader-election
Enable Leader election to avoid multiple replicas of the controller reporting the status of Ingress, VirtualServer and VirtualServerRoute resources -- only one replica will report status (default true). See -report-ingress-status flag. (default true)
-enable-preview-policies
Enable preview policies
-enable-prometheus-metrics
Enable exposing NGINX or NGINX Plus metrics in the Prometheus format
-enable-snippets
Enable custom NGINX configuration snippets in Ingress, VirtualServer, VirtualServerRoute and TransportServer resources.
-enable-tls-passthrough
Enable TLS Passthrough on port 443. Requires -enable-custom-resources
-external-service string
Specifies the name of the service with the type LoadBalancer through which the Ingress controller pods are exposed externally.
The external address of the service is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. For Ingress resources only: Requires -report-ingress-status.
-global-configuration string
The namespace/name of the GlobalConfiguration resource for global configuration of the Ingress Controller. Requires -enable-custom-resources. Format: <namespace>/<name>
-health-status
Add a location based on the value of health-status-uri to the default server. The location responds with the 200 status code for any request.
Useful for external health-checking of the Ingress controller
-health-status-uri string
Sets the URI of health status location in the default server. Requires -health-status (default "/nginx-health")
-ingress-class string
A class of the Ingress controller.
An IngressClass resource with the name equal to the class must be deployed. Otherwise, the Ingress Controller will fail to start.
The Ingress controller only processes resources that belong to its class - i.e. have the "ingressClassName" field resource equal to the class.
The Ingress Controller processes all the VirtualServer/VirtualServerRoute/TransportServer resources that do not have the "ingressClassName" field for all versions of kubernetes. (default "nginx")
-ingress-template-path string
Path to the ingress NGINX configuration template for an ingress resource.
(default for NGINX "nginx.ingress.tmpl"; default for NGINX Plus "nginx-plus.ingress.tmpl")
-ingresslink string
Specifies the name of the IngressLink resource, which exposes the Ingress Controller pods via a BIG-IP system.
The IP of the BIG-IP system is used when reporting the status of Ingress, VirtualServer and VirtualServerRoute resources. For Ingress resources only: Requires -report-ingress-status.
-leader-election-lock-name string
Specifies the name of the ConfigMap, within the same namespace as the controller, used as the lock for leader election. Requires -enable-leader-election. (default "nginx-ingress-leader-election")
-log_backtrace_at value
when logging hits line file:N, emit a stack trace
-log_dir string
If non-empty, write log files in this directory
-logtostderr
log to standard error instead of files
-main-template-path string
Path to the main NGINX configuration template. (default for NGINX "nginx.tmpl"; default for NGINX Plus "nginx-plus.tmpl")
-nginx-configmaps string
A ConfigMap resource for customizing NGINX configuration. If a ConfigMap is set,
but the Ingress controller is not able to fetch it from Kubernetes API, the Ingress controller will fail to start.
Format: <namespace>/<name>
-nginx-debug
Enable debugging for NGINX. Uses the nginx-debug binary. Requires 'error-log-level: debug' in the ConfigMap.
-nginx-plus
Enable support for NGINX Plus
-nginx-reload-timeout int
The timeout in milliseconds which the Ingress Controller will wait for a successful NGINX reload after a change or at the initial start. (default 60000) (default 60000)
-nginx-status
Enable the NGINX stub_status, or the NGINX Plus API. (default true)
-nginx-status-allow-cidrs string
Add IPv4 IP/CIDR blocks to the allow list for NGINX stub_status or the NGINX Plus API. Separate multiple IP/CIDR by commas. (default "127.0.0.1")
-nginx-status-port int
Set the port where the NGINX stub_status or the NGINX Plus API is exposed. [1024 - 65535] (default 8080)
-prometheus-metrics-listen-port int
Set the port where the Prometheus metrics are exposed. [1024 - 65535] (default 9113)
-prometheus-tls-secret string
A Secret with a TLS certificate and key for TLS termination of the prometheus endpoint.
-proxy string
Use a proxy server to connect to Kubernetes API started by "kubectl proxy" command. For testing purposes only.
The Ingress controller does not start NGINX and does not write any generated NGINX configuration files to disk
-ready-status
Enables the readiness endpoint '/nginx-ready'. The endpoint returns a success code when NGINX has loaded all the config after the startup (default true)
-ready-status-port int
Set the port where the readiness endpoint is exposed. [1024 - 65535] (default 8081)
-report-ingress-status
Updates the address field in the status of Ingress resources. Requires the -external-service or -ingresslink flag, or the 'external-status-address' key in the ConfigMap.
-spire-agent-address string
Specifies the address of the running Spire agent. Requires -nginx-plus and is for use with NGINX Service Mesh only. If the flag is set,
but the Ingress Controller is not able to connect with the Spire Agent, the Ingress Controller will fail to start.
-stderrthreshold value
logs at or above this threshold go to stderr
-transportserver-template-path string
Path to the TransportServer NGINX configuration template for a TransportServer resource.
(default for NGINX "nginx.transportserver.tmpl"; default for NGINX Plus "nginx-plus.transportserver.tmpl")
-v value
log level for V logs
-version
Print the version, git-commit hash and build date and exit
-virtualserver-template-path string
Path to the VirtualServer NGINX configuration template for a VirtualServer resource.
(default for NGINX "nginx.virtualserver.tmpl"; default for NGINX Plus "nginx-plus.virtualserver.tmpl")
-vmodule value
comma-separated list of pattern=N settings for file-filtered logging
-watch-namespace string
Namespace to watch for Ingress resources. By default the Ingress controller watches all namespaces
-wildcard-tls-secret string
A Secret with a TLS certificate and key for TLS termination of every Ingress host for which TLS termination is enabled but the Secret is not specified.
Format: <namespace>/<name>. If the argument is not set, for such Ingress hosts NGINX will break any attempt to establish a TLS connection.
If the argument is set, but the Ingress controller is not able to fetch the Secret from Kubernetes API, the Ingress controller will fail to start.
Config Map
apiVersion: v1
data:
  "1317": emoney-node/emoney-api:1317
  "9090": emoney-node/emoney-grpc:9090
  "26656": emoney-node/emoney:26656
  "26657": emoney-node/emoney-rpc:26657
kind: ConfigMap
metadata:
  annotations:
    meta.helm.sh/release-name: emoney
    meta.helm.sh/release-namespace: emoney-node
  creationTimestamp: "2021-11-01T18:06:49Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:1317: {}
        f:9090: {}
        f:26656: {}
        f:26657: {}
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app.kubernetes.io/managed-by: {}
    manager: helm
    operation: Update
    time: "2021-11-01T18:06:49Z"
  name: tcp-services-configmap
  namespace: emoney-node
  resourceVersion: "2056146"
  selfLink: /api/v1/namespaces/emoney-node/configmaps/tcp-services-configmap
  uid: 188f5dc8-02f9-4ee5-a5e3-819d00ff8b67
Name: emoney
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.30.240
Port: p2p 26656/TCP
TargetPort: 26656/TCP
Endpoints: 10.0.36.192:26656
Session Affinity: None
Events: <none>
Name: emoney-api
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.166.97
Port: api 1317/TCP
TargetPort: 1317/TCP
Endpoints: 10.0.36.192:1317
Session Affinity: None
Events: <none>
Name: emoney-grpc
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.136.177
Port: grpc 9090/TCP
TargetPort: 9090/TCP
Endpoints: 10.0.36.192:9090
Session Affinity: None
Events: <none>
Name: emoney-nginx-controller-nginx-ingress
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney-nginx-controller
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=emoney-nginx-controller-nginx-ingress
helm.sh/chart=nginx-ingress-0.11.3
Annotations: meta.helm.sh/release-name: emoney-nginx-controller
meta.helm.sh/release-namespace: emoney-node
Selector: app=emoney-nginx-controller-nginx-ingress
Type: LoadBalancer
IP: 172.20.16.202
LoadBalancer Ingress: lb removed
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32250/TCP
Endpoints: 10.0.43.32:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32375/TCP
Endpoints: 10.0.43.32:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30904
Events: <none>
Name: emoney-rpc
Namespace: emoney-node
Labels: app.kubernetes.io/instance=emoney
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ibcnode
app.kubernetes.io/version=1.16.0
helm.sh/chart=ibcnode-0.1.0
Annotations: meta.helm.sh/release-name: emoney
meta.helm.sh/release-namespace: emoney-node
Selector: app.kubernetes.io/instance=emoney,app.kubernetes.io/name=ibcnode
Type: ClusterIP
IP: 172.20.42.163
Port: rpc 26657/TCP
TargetPort: 26657/TCP
Endpoints: 10.0.36.192:26657
Session Affinity: None
Events: <none>
helm repo add nginx-stable https://helm.nginx.com/stable --kubeconfig=./kubeconfig || echo "helm repo already added"
helm repo update --kubeconfig=./kubeconfig || echo "helm repo already updated"
helm upgrade ${app_name}-nginx-controller -n ${app_namespace} nginx-stable/nginx-ingress \
--install \
--kubeconfig=./kubeconfig \
--create-namespace \
--set controller.service.type=LoadBalancer \
--set controller.tcp.configMapNamespace=${app_namespace} \
--set controller.ingressClass="${app_name}-ingress"
kubectl rollout status -w deployment/${app_name} --kubeconfig=./kubeconfig -n ${app_namespace}
#- --tcp-services-configmap=emoney-node/tcp-services-configmap
You could say the Helm chart is opinionated in that it doesn't expose the option to set those args as a chart value. It sets them by itself, based on conditional logic, when they are required according to the values.
When I check the nginx template in the repo, I see that additional args are passed from the template in the params helper file. Those seem to be generated dynamically, i.e.:
{{- if .Values.tcp }}
- --tcp-services-configmap={{ default "$(POD_NAMESPACE)" .Values.controller.tcp.configMapNamespace }}/{{ include "ingress-nginx.fullname" . }}-tcp
{{- end }}
So it seems it will use this flag only if the tcp value isn't empty; on the same condition, it will also create the ConfigMap.
Further, the tcp value allows you to set a key configMapNamespace. So if you were to set only this key, the flag would still be used as per the params helper. You then need to create your ConfigMap yourself in the provided namespace, and its name must match {{ include "ingress-nginx.fullname" . }}-tcp.
So you could create the ConfigMap in the default namespace and name it ingress-nginx-tcp or similar, depending on how you set the release name.
kubectl create configmap ingress-nginx-tcp --from-literal 1883=mqtt/emqx:1883 -n default
helm install --set controller.tcp.configMapNamespace=default ingress-nginx ingress-nginx/ingress-nginx
I think the only problem with that is that you cannot create it in the .Release.Namespace, since when tcp isn't empty the chart will attempt to create a ConfigMap there by itself, which would result in a conflict. At least that's how I interpret the templates in the chart repo.
I personally have configured TCP via a values file that I pass to helm with -f.
helm install -f values.yaml ingress-nginx ingress-nginx/ingress-nginx
# configure the tcp configmap
tcp:
  1883: mqtt/emqx:1883
  8883: mqtt/emqx:8883
# enable the service and expose the tcp ports.
# be careful as this will potentially make them
# available on the public web
controller:
  service:
    enabled: true
    ports:
      http: 80
      https: 443
      mqtt: 1883
      mqttssl: 8883
    targetPorts:
      http: http
      https: https
      mqtt: mqtt
      mqttssl: mqttssl
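To check that the chart wired everything up, you can inspect the generated ConfigMap and the controller arguments after installing. A minimal sketch, assuming the release is named ingress-nginx and installed into the ingress-nginx namespace (the resource names below follow from that assumption):
# the chart should have rendered a <release-fullname>-tcp ConfigMap from the tcp values
kubectl -n ingress-nginx get configmap ingress-nginx-tcp -o yaml
# the controller Deployment should now carry the --tcp-services-configmap flag
kubectl -n ingress-nginx get deployment ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'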

Azure Kubernetes - Prometheus deployed as a part of Istio not showing the deployments?

I have used the following configuration to set up Istio:
cat << EOF | kubectl apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  # Use the default profile as the base
  # More details at: https://istio.io/docs/setup/additional-setup/config-profiles/
  profile: default
  # Enable the addons that we will want to use
  addonComponents:
    grafana:
      enabled: true
    prometheus:
      enabled: true
    tracing:
      enabled: true
    kiali:
      enabled: true
  values:
    global:
      # Ensure that the Istio pods are only scheduled to run on Linux nodes
      defaultNodeSelector:
        beta.kubernetes.io/os: linux
    kiali:
      dashboard:
        auth:
          strategy: anonymous
  components:
    egressGateways:
      - name: istio-egressgateway
        enabled: true
EOF
and exposed the Prometheus service as shown below:
kubectl expose service prometheus --type=LoadBalancer --name=prometheus-svc --namespace istio-system
kubectl get svc prometheus-svc -n istio-system -o json
export PROMETHEUS_URL=$(kubectl get svc prometheus-svc -n istio-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}"):$(kubectl get svc prometheus-svc -n istio-system -o 'jsonpath={.spec.ports[0].port}')
echo http://${PROMETHEUS_URL}
curl http://${PROMETHEUS_URL}
I have deployed an application; however, I couldn't see its deployments in Prometheus.
The standard Prometheus installation by Istio does not configure your pods to send metrics to Prometheus; it just collects data from the Istio resources.
To have your pods scraped, add the following annotations to the deployment.yaml of your application:
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"   # determines if a pod should be scraped. Set to true to enable scraping.
        prometheus.io/path: /metrics   # determines the path to scrape metrics at. Defaults to /metrics.
        prometheus.io/port: "80"       # determines the port to scrape metrics at. Defaults to 80.
[...]
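To verify the pod is actually picked up, you can port-forward the bundled Prometheus and check its targets page. A minimal sketch, assuming the Istio addon created a Service named prometheus in istio-system listening on 9090:
kubectl -n istio-system port-forward svc/prometheus 9090:9090
# then open http://localhost:9090/targets and look for your pod among the targets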
By the way: The prometheus instance installed with istioctl should not be used for production. From docs:
[...] pass --set values.prometheus.enabled=true during installation. This built-in deployment of Prometheus is intended for new users to help them quickly getting started. However, it does not offer advanced customization, like persistence or authentication and as such should not be considered production ready.
You should set up your own Prometheus and configure Istio to report to it. See:
Reference: https://istio.io/latest/docs/ops/integrations/prometheus/#option-1-metrics-merging
The following YAML provided by Istio can be used as a reference for setting up Prometheus:
https://raw.githubusercontent.com/istio/istio/release-1.7/samples/addons/prometheus.yaml
Furthermore, if I remember correctly, installation of addons like Kiali, Prometheus, etc. with istioctl will be removed in Istio 1.8 (release date December 2020). So you might want to set up your own instances with Helm anyway.
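As a rough sketch of that Helm route, the community chart could be used; the repository alias, release name, and namespace below are assumptions, not something prescribed by Istio:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/prometheus \
  --namespace monitoring --create-namespace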

Nginx Ingress returns 502 Bad Gateway on Kubernetes

I have a Kubernetes cluster deployed on AWS (EKS). I deployed the cluster using the “eksctl” command line tool. I’m trying to deploy a Dash python app on the cluster without success. The default port for Dash is 8050. For the deployment I used the following resources:
pod
service (ClusterIP type)
ingress
You can check the resource configuration files below:
pod-configuration-file.yml
kind: Pod
apiVersion: v1
metadata:
  name: dashboard-app
  labels:
    app: dashboard
spec:
  containers:
    - name: dashboard
      image: my_image_from_ecr
      ports:
        - containerPort: 8050
service-configuration-file.yml
kind: Service
apiVersion: v1
metadata:
  name: dashboard-service
spec:
  selector:
    app: dashboard
  ports:
    - port: 8050 # exposed port
      targetPort: 8050
ingress-configuration-file.yml (host based routing)
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dashboard.my_domain
      http:
        paths:
          - backend:
              serviceName: dashboard-service
              servicePort: 8050
            path: /
I followed the steps below:
kubectl apply -f pod-configuration-file.yml
kubectl apply -f service-configuration-file.yml
kubectl apply -f ingress-confguration-file.yml
I also noticed that the pod deployment works as expected:
kubectl logs my_pod:
and the output is:
Dash is running on http://127.0.0.1:8050/
Warning: This is a development server. Do not use app.run_server
in production, use a production WSGI server like gunicorn instead.
* Serving Flask app "annotation_analysis" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
You can see from the ingress configuration file that I want to do host-based routing using my domain. For this to work, I have also deployed an nginx-ingress, and I have created an “A” record set using Route53
that maps “dashboard.my_domain” to the nginx-ingress:
kubectl get ingress
and the output is:
NAME                HOSTS                 ADDRESS                                      PORTS   AGE
dashboard-ingress   dashboard.my_domain   nginx-ingress.elb.aws-region.amazonaws.com   80      93s
Moreover,
kubectl describe ingress dashboard-ingress
and the output is:
Name: dashboard-ingress
Namespace: default
Address: nginx-ingress.elb.aws-region.amazonaws.com
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
host.my-domain
/ dashboard-service:8050 (192.168.36.42:8050)
Annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: false
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
Unfortunately, when I try to access the Dash app in the browser, I get a
502 Bad Gateway error from nginx. Could you please help me? My Kubernetes knowledge is limited.
Thanks in advance.
It had nothing to do with Kubernetes or AWS settings. I had to change my Python Dash code from:
if __name__ == "__main__":
    app.run_server(debug=True)
to:
if __name__ == "__main__":
    app.run_server(host='0.0.0.0', debug=True)
The addition of host='0.0.0.0' did the trick!
I think you'll need to check whether any other service is exposed at path / on the same host.
Secondly, try removing the rewrite-target annotation. Also, can you please update your question with the output of kubectl describe ingress <ingress_Name>?
I would also suggest using the backend-protocol annotation with the value HTTP (the default value; you can skip it if the dashboard application is not SSL-configured and only this application is served at the said host). But you may need to add it if multiple applications are served at this host: create one Ingress with backend-protocol: HTTP for non-SSL services, and another with backend-protocol: HTTPS to serve traffic to SSL-enabled services.
For more information on the backend-protocol annotation, kindly refer to this link.
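For illustration, the annotation would sit in the Ingress metadata roughly like this (a sketch, with HTTP as the assumed backend protocol):
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"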
I have often faced this issue in my Ingress Setup and these steps have helped me resolve it.

Expose port 80 on Digital Ocean's managed Kubernetes without a load balancer

I would like to expose my managed DigitalOcean Kubernetes (single node) cluster's service on port 80 without the use of DigitalOcean's load balancer. Is this possible? How would I do this?
This is essentially a hobby project (I am just beginning with Kubernetes) and I want to keep the cost very low.
You can deploy an Ingress configured to use the host network and port 80/443.
DO's firewall for your cluster doesn't have 80/443 inbound open by default.
If you edit the auto-created firewall, the rules will eventually reset themselves. The solution is to create a separate firewall, also pointing at the same Kubernetes worker nodes:
$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:CLUSTER_UUID \
--name=k8s-extra-mycluster
(Get the CLUSTER_UUID value from the dashboard or the ID column from doctl kubernetes cluster list)
Create the nginx ingress using the host network. I've included the helm chart config below, but you could do it via the direct install process too.
EDIT: The Helm chart in the above link has been DEPRECATED. Therefore, the correct way of installing the chart (as per the new docs) is:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
After this repo is added & updated
# For Helm 2
$ helm install stable/nginx-ingress --name=myingress -f myingress.values.yml
# For Helm 3
$ helm install myingress stable/nginx-ingress -f myingress.values.yml
#EDIT: The New way to install in helm 3
helm install myingress ingress-nginx/ingress-nginx -f myingress.values.yaml
myingress.values.yml for the chart:
---
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP
rbac:
  create: true
You should be able to access the cluster on :80 and :443 via any worker node IP, and it'll route traffic to your ingress.
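As a quick sanity check before setting up DNS, you can hit the ingress directly on a worker node's public IP while faking the Host header (the IP below is a placeholder):
curl -H "Host: testing123.example.com" http://WORKER_NODE_PUBLIC_IP/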
Since node IPs can and do change, look at deploying external-dns to manage DNS entries pointing to your worker nodes. Again, using the Helm chart and assuming your DNS domain is hosted by DigitalOcean (though any supported DNS provider will work):
# For Helm 2
$ helm install --name=mydns -f mydns.values.yml stable/external-dns
# For Helm 3
$ helm install mydns stable/external-dns -f mydns.values.yml
mydns.values.yml for the chart:
---
provider: digitalocean
digitalocean:
  # create the API token at https://cloud.digitalocean.com/account/api/tokens
  # needs read + write
  apiToken: "DIGITALOCEAN_API_TOKEN"
domainFilters:
  # domains you want external-dns to be able to edit
  - example.com
rbac:
  create: true
Create a Kubernetes Ingress resource to route requests to an existing Kubernetes service:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testing123-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: testing123.example.com # the domain you want associated
      http:
        paths:
          - path: /
            backend:
              serviceName: testing123-service # existing service
              servicePort: 8000 # existing service port
After a minute or so you should see the DNS records appear and be resolvable:
$ dig testing123.example.com # should return worker IP address
$ curl -v http://testing123.example.com # should send the request through the Ingress to your backend service
(Edit: editing the automatically created firewall rules eventually breaks; add a separate firewall instead.)
A NodePort Service can do what you want. Something like this:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80        # the Service port (required by the Service spec)
      nodePort: 80
      targetPort: 80
This will redirect incoming traffic from port 80 of the node to port 80 of your pod. Publish the node IP in DNS and you're set. (Note that the default NodePort range is 30000-32767, so nodePort: 80 will only be accepted if the API server's --service-node-port-range has been extended to include it.)
In general, exposing a service to the outside world like this is a very, very bad idea, because the single node passing through all traffic to the service is both going to receive unbalanced load and be a single point of failure. That consideration doesn't apply to a single-node cluster, though, so with the caveat that LoadBalancer and Ingress are the fault-tolerant ways to do what you're looking for, NodePort is best for this extremely specific case.
In general exposing a service to the outside world like this is a very, very bad idea, because the single node passing through all traffic to the service is both going to receive unbalanced load and be a single point of failure. That consideration doesn't apply to a single-node cluster, though, so with the caveat that LoadBalancer and Ingress are the fault-tolerant ways to do what you're looking for, NodePort is best for this extremely specific case.