How to convert nginx-ingress annotations to --set format to enable Prometheus metrics

I want to set annotations on the command line while installing nginx-ingress. My values.yaml file looks like below, and I want to use command-line arguments instead of the values.yaml file.
controller:
  metrics:
    port: 10254
    enabled: true
    service:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
    serviceMonitor:
      enabled: true
      namespace: monitoring
      namespaceSelector:
        any: true
I tried the following arguments, but they give an error:
--set controller.metrics.service.annotations."prometheus\.io\/scrape"="true" --set controller.metrics.service.annotations."prometheus\.io\/port"="10254"
Error:
Error: release nginx-ingress failed: Service in version "v1" cannot be handled as a Service: v1.Service.ObjectMeta: v1.ObjectMeta.Annotations: ReadString: expects " or n, but found 1, error found in #10 byte of ...|io/port":10254,"prom|..., bigger context ...|,"metadata":{"annotations":{"prometheus.io/port":10254,"prometheus.io/scrape":true},"labels":{"app.k|...
Any suggestions on how exactly these annotations should be passed?

I just had the same issue! When you look at the chart, the annotation values are defined as strings. So when I use the command below, it successfully sets the values. The trick is to use --set-string rather than --set:
helm upgrade ingress-controller ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.metrics.enabled=true \
  --set-string controller.metrics.service.annotations."prometheus\.io/scrape"="true" \
  --set-string controller.metrics.service.annotations."prometheus\.io/port"="10254"
We can confirm the values are set by validating with helm get values ingress-controller --namespace ingress-nginx:
controller:
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
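As an aside, you can see why plain --set fails by rendering the chart locally; helm template skips API-server validation, so the type coercion shows up in the output (the release name demo is just a placeholder):
helm template demo ingress-nginx/ingress-nginx \
  --set controller.metrics.enabled=true \
  --set controller.metrics.service.annotations."prometheus\.io/port"=10254 \
  | grep 'prometheus.io/port'
# prints: prometheus.io/port: 10254 (an unquoted int; the Service schema requires annotation values to be strings)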
Now, if you would like to actually get the metrics into Prometheus, the service annotations alone did not work for me. I had to use controller.podAnnotations to get this working:
helm upgrade ingress-controller ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.metrics.enabled=true \
--set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
--set-string controller.podAnnotations."prometheus\.io/port"="10254"
Let me know if this worked for you! :)

Related

letsencrypt kubernetes: How can I include a ClusterIssuer in cert-manager using the helm chart instead of deploying it as a separate manifest?

I would like to add SSL support to my web app (WordPress) deployed on Kubernetes. For that I deployed cert-manager using Helm as follows:
helm upgrade \
cert-manager \
--namespace cert-manager \
--version v1.9.1 \
--set installCRDs=true \
--set ingressShim.defaultIssuerName=letsencrypt-prod \
--set ingressShim.defaultIssuerKind=ClusterIssuer \
--set ingressShim.defaultIssuerGroup=cert-manager.io \
--create-namespace \
jetstack/cert-manager --install
Then I deployed WordPress using Helm as well, with values.yml looking like:
# Change default svc type
service:
  type: ClusterIP
# ingress resource
ingress:
  enabled: true
  hostname: app.benighil-mohamed.com
  path: /
  annotations:
    # kubernetes.io/ingress.class: azure/application-gateway
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt-prod
  extraTls:
    - hosts:
        - "{{ .Values.ingress.hostname }}" # ie: app.benighil-mohamed.com
      secretName: "{{ .Release.Name }}-{{ .Values.ingress.hostname }}" # ie: wp-app.benighil-mohamed.com
However, when I check certificates and certificaterequests, I get the following:
vscode ➜ /workspaces/flux/ingress $ kubectl get certificate -n app -owide
NAME READY SECRET ISSUER STATUS AGE
wp-benighil.benighil-mohamed.com False wp-benighil.benighil-mohamed.com letsencrypt-prod Issuing certificate as Secret does not exist 25m
vscode ➜ /workspaces/flux/ingress $ kubectl get certificaterequests -n app -owide
NAME APPROVED DENIED READY ISSUER REQUESTOR STATUS AGE
wp-benighil.benighil-mohamed.com-45d6s True False letsencrypt-prod system:serviceaccount:cert-manager:cert-manager Referenced "ClusterIssuer" not found: clusterissuer.cert-manager.io "letsencrypt-prod" not found 27m
Any idea, please?
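In case it helps others reading this: the CertificateRequest status above says the letsencrypt-prod ClusterIssuer was not found. The ingressShim.defaultIssuer* flags only set defaults for ingress-shim; they do not create the issuer itself, so it still has to be applied as a resource. A minimal sketch of such a ClusterIssuer (the email is a placeholder) looks like:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com       # placeholder; use a real contact address
    privateKeySecretRef:
      name: letsencrypt-prod      # secret that will store the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx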

Helm can't pull registry image

After helm upgrade I got this error:
Failed to pull image "myhostofgitlab.ru/common-core-executor:1bac97ef": rpc error: code = Unknown desc = Error response from daemon: Head https://myhostofgitlab.ruv2/common-core-executor/manifests/1bac97ef: denied: access forbidden
My run command:
k8s-deploy-Prod:
  image: alpine/helm:latest
  stage: deploy
  script:
    - helm upgrade ${PREFIX}-common-core-executor k8s/helm/common-core-executor --debug --atomic --install --wait --history-max 3
      --set image.repository=${CI_REGISTRY_IMAGE}/common-core-executor
      --set image.tag=${CI_COMMIT_SHORT_SHA}
      --set name=${PREFIX}-common-core-executor
      --set service.name=${PREFIX}-common-core-executor
      --set branch=${PREFIX}
      --set ingress.enabled=true
      --set ingress.hosts[0].host=${PREFIX}.common-core-executor.k8s.test.zone
      --set ingress.tls[0].hosts[0]=${PREFIX}.common-core-executor.k8s.test.zone
      --set secret.name=${PREFIX}-${PROJECT_NAME}-secret
      --timeout 2m0s
      -f k8s/helm/common-core-executor/common-core-executor-values.yaml
      -n ${NAMESPACE}
Where am I wrong?
Before that error I followed some steps from the official instructions. First, I created credentials like this (it's just sample data):
apiVersion: v1
kind: Secret
data:
  .dockerconfigjson: eyJhdXRocyI6eyJodHRwczovL2hvc3QtZm9yLXN0YWNrLW92ZXJmbG93OnsidXNlcm5hbWUiOiJzdGFja292ZXJmbG93IiwicGFzc3dvcmQiOiJzdGFja292ZXJmbG93IiwiYXV0aCI6Inh4eCJ9fX0=
metadata:
  name: regcred
  namespace: prod-common-service
type: kubernetes.io/dockerconfigjson
And added this in the containers section of deployment.yaml:
imagePullSecrets:
- name: regcred
Thanks!
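One thing worth double-checking (a hedged sketch, not a confirmed diagnosis): imagePullSecrets is a pod-level field, a sibling of containers in the pod spec, not something inside the containers section. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: common-core-executor
spec:
  selector:
    matchLabels:
      app: common-core-executor
  template:
    metadata:
      labels:
        app: common-core-executor
    spec:
      imagePullSecrets:           # pod-level, alongside containers
        - name: regcred
      containers:
        - name: common-core-executor
          image: myhostofgitlab.ru/common-core-executor:1bac97ef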

How to add '- {}' value with helm --set parameter?

I am stuck with the following issue. I am trying to implement Kubernetes NetworkPolicies via values provided to Helm.
values.yml
...
networkpolicy: []
# Allows all ingress and egress
# - name: my-app
#   podSelector:
#     matchLabels:
#       app: my-app
#   egress:
#     - {}
#   ingress:
#     - {}
...
Running the install command:
helm --debug --v 3 --kubeconfig $kubeconf upgrade --install $name \
$helmchart \
--set networkpolicy[0].name="my-app" \
--set networkpolicy[0].podSelectory.matchLabels.app="my-app" \
--set networkpolicy[0].egress[0]="''{}''"
Error message:
...
helm.go:84: [debug] error validating "": error validating data: ValidationError(NetworkPolicy.spec.egress[0]): invalid type for io.k8s.api.networking.v1.NetworkPolicyEgressRule: got "string", expected "map"
...
How can I set the "- {}" value with --set networkpolicy[0].egress[0] ... ???
Thanks.
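For anyone else hitting this: plain --set has no literal for an empty map, but Helm 3.10+ added --set-json, which accepts raw JSON. A sketch assuming that Helm version, with variables as in the question:
helm --kubeconfig $kubeconf upgrade --install $name $helmchart \
  --set-json 'networkpolicy=[{"name":"my-app","podSelector":{"matchLabels":{"app":"my-app"}},"egress":[{}],"ingress":[{}]}]'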

How to create only internal load balancer with ingress-nginx chart?

In my AWS EKS cluster, I have installed nginx-ingress with the following command:
helm upgrade --install -f controller.yaml \
--namespace nginx-ingress \
--create-namespace \
--version 3.26.0 \
nginx-ingress ingress-nginx/ingress-nginx
The controller.yaml file looks like this:
controller:
  ingressClass: nginx-internal
  service:
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
I have a few applications, each with its own Ingress using a different virtual host, and I want all Ingress objects to point to the internal load balancer. Even if I set the ingress class in the applications' Ingresses, they still seem to point to the public load balancer:
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx-internal
So, is there a way to create only a single internal load balancer, with the Ingresses pointing to it?
Thanks
I managed to get this working by using the following controller.yaml:
controller:
  ingressClassByName: true
  ingressClassResource:
    name: nginx-internal
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx-internal"
  service:
    # Disable the external LB
    external:
      enabled: false
    # Enable the internal LB. The annotations are important here; without
    # these you will get a "classic" load balancer
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
        service.beta.kubernetes.io/aws-load-balancer-type: nlb
Then you can use the ingressClassName as follows:
kind: Ingress
spec:
  ingressClassName: nginx-internal
It's not necessary, but I deployed this to a namespace that reflects the internal-only ingress:
helm upgrade --install \
--create-namespace ingress-nginx-internal ingress-nginx/ingress-nginx \
--namespace ingress-nginx-internal \
-f controller.yaml
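To double-check that only the internal load balancer was provisioned, you can list the chart's Services (namespace as in the install above); only the internal one should be of type LoadBalancer:
kubectl get svc -n ingress-nginx-internal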
I noticed in your controller.yaml that you enabled the internal setup. According to the documentation, this setup creates two load balancers, an external and an internal one, in case you want to expose some applications to the internet and others only inside your VPC in the same Kubernetes cluster.
If you want just one internal load balancer, try setting up your controller.yaml like this:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxxxx,subnet-yyyyy,subnet-zzzzz"
It will provision just one NLB that routes the traffic internally.
Using the service.beta.kubernetes.io/aws-load-balancer-subnets annotation, you can choose which Availability Zones / subnets your load balancer will route traffic to.
If you remove the service.beta.kubernetes.io/aws-load-balancer-type annotation, a Classic Load Balancer will be provisioned instead of a Network Load Balancer.
Based on @rmakoto's answer, it seems some configs were missing to tell AWS to create an internal NLB. I tried with the following configs, and now it works as expected:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-name: "k8s-nlb"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
      service.beta.kubernetes.io/aws-load-balancer-subnets: "subnet-xxx,subnet-yyy,subnet-zzz"
Now, to deploy, run the following command:
helm upgrade --install \
--create-namespace ingress-nginx nginx-stable/nginx-ingress \
--namespace ingress-nginx \
-f controller.yaml
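To verify the scheme from the AWS side (a sketch; the name comes from the aws-load-balancer-name annotation above), this should print internal:
aws elbv2 describe-load-balancers --names k8s-nlb \
  --query 'LoadBalancers[0].Scheme' --output text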
If you only want classic ELBs, this worked for me:
controller:
  service:
    external:
      enabled: false
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Helm [stable/nginx-ingress] Getting issue while passing headers

Version of Helm and Kubernetes: Client: &version.Version{SemVer:"v2.14.1" and 1.13.7-gke.24
Which chart: stable/nginx-ingress [v0.24.1]
What happened: Trying to override headers using --set-string, but it does not work as expected. It always gives issues with the parsing:
/usr/sbin/helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
Error: release cx-nginx-1 failed: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadMapCB: expect { or n, but found [, error found in #10 byte of ...|","data":["\"X-Diffe|..., bigger context ...|{"apiVersion":"v1","data":["\"X-Different-Name\":\"true\"","\"X-Request-Start|...
What you expected to happen: I want to override the headers that are there by default in values.yaml with custom headers.
How to reproduce it (as minimally and precisely as possible):
I have provided the command to reproduce:
helm install --name cx-nginx-1 --set controller.name=cx-nginx-1 --set controller.kind=Deployment --set controller.service.loadBalancerIP= --set controller.metrics.enabled=true --set-string 'controller.headers={"X-Different-Name":"true","X-Request-Start":"test-header","X-Using-Nginx-Controller":"true"}' .
I tried to run it in debug mode (--dry-run --debug); it shows me a ConfigMap like below:
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1
    component: "cx-nginx-1"
    heritage: Tiller
    release: foiled-coral
  name: foiled-coral-nginx-ingress-custom-headers
  namespace: cx-ingress
data:
  - X-Different-Name:true
  - X-Request-Start:test-header
  - X-Using-Nginx-Controller:true
It seems like it's indenting by 4 instead of 2, so the data renders as a list instead of a map. I'm also getting the warning below:
Warning: Merging destination map for chart 'nginx-ingress'. Cannot overwrite table item 'headers', with non table value: map[X-Different-Name:true X-Request-Start:test-header X-Using-Nginx-Controller:true]
Kindly help me to pass the headers in the right way.
Note: controller.headers is deprecated; make sure to use controller.proxySetHeaders instead.
Helm's --set has some limitations.
Your best option is to avoid --set and use --values instead.
You can declare all your custom values in a file like this:
# values.yaml
controller:
  name: "cx-nginx-1"
  kind: "Deployment"
  service:
    loadBalancerIP: ""
  metrics:
    enabled: true
  proxySetHeaders:
    X-Different-Name: "true"
    X-Request-Start: "true"
    X-Using-Nginx-Controller: "true"
Then use it on install:
helm install --name cx-nginx-1 stable/nginx-ingress \
--values=values.yaml
If you want to use --set anyway, you should use this notation:
helm install --name cx-nginx-1 stable/nginx-ingress \
--set controller.name=cx-nginx-1 \
--set controller.kind=Deployment \
--set controller.service.loadBalancerIP= \
--set controller.metrics.enabled=true \
--set-string controller.proxySetHeaders.X-Different-Name="true" \
--set-string controller.proxySetHeaders.X-Request-Start="true" \
--set-string controller.proxySetHeaders.X-Using-Nginx-Controller="true"
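Either way, you can sanity-check the rendered headers ConfigMap before installing by adding --dry-run --debug and inspecting the output (the grep pattern is just illustrative):
helm install --name cx-nginx-1 stable/nginx-ingress \
  --values=values.yaml --dry-run --debug | grep -A6 'custom-headers'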