Helm - overwrite specific values from values.yaml

I'm using Helm and for nginx-ingress I need to add some annotations.
Inside the chart's values.yaml file, podAnnotations is empty ({}).
The question I have is: what is the right way to add these annotations?
The annotations are a child of controller, which is the root element of the values.yaml:
controller:
  ...
  podAnnotations:
    ...
Now, I get the feeling that I have to copy the whole values.yaml file into my custom-values.yaml, in which I add the annotations:
$> helm install -f ./custom-values.yaml stable/nginx-ingress
But if I copy the whole values file, I get the feeling that I can run into trouble if stable/nginx-ingress changes values inside values.yaml over time.

You do not have to copy all values; you can use your own file and override only the values you need.
The values.yaml file is also important to templates. This file contains the default values for a chart. These values may be overridden by users during helm install or helm upgrade.
See https://helm.sh/docs/chart_template_guide/
Thus, just add the annotations to custom-values.yaml (plus any other default values you want to change) and then run:
helm install -f ./custom-values.yaml stable/nginx-ingress
Here is an example of my custom-values.yaml:
controller:
  service:
    annotations:
      field.cattle.io/projectId: c-xxxxx:p-xxxxx
and the important part of the result:
...
# Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/projectId: "c-xxxxx:p-xxxxx"
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.20.0
    component: "controller"
    heritage: Tiller
    release: release-name
  name: release-name-nginx-ingress-controller
...
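Since the original question was about podAnnotations rather than service annotations, the same override pattern applies there as well. A minimal sketch of such a custom-values.yaml, with a purely illustrative annotation key:
controller:
  podAnnotations:
    example.com/some-annotation: "some-value"
Helm deep-merges this file over the chart's defaults, so every key you do not set keeps its default value.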

Related

ArgoCD helm chart how to override values yml in declarative helm chart deployment App/controller

I have a YAML manifest that gets deployed by the ArgoCD controller; it deploys a Helm chart from Artifactory.
For my local development I use a separate values.yml in the Helm chart.
My Application manifest looks like below (refer to the git link):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: <name-to-the-app>
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://harbor.1000kit.org/chartrepo/1000kit/
    targetRevision: <version-hardcode-in-repo>
    chart: <chart-name-that-is-getting-deployed>
    helm:
      releaseName: <release-name-hardcoded>
      # custom values to override the helm chart one
      values: |
        <pass-the-custom-values>
  destination:
    server: https://kubernetes.default.svc
    namespace: <namespace-where-to-be-deployed>
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
The Helm chart that is getting deployed contains its own values.yaml.
I am trying to override the values.yaml present in the Helm chart in Artifactory, so I pass all the values as part of source -> helm -> values as shown above.
Question:
In the custom values I skipped some keys, but ArgoCD is fetching those values from the chart's values.yaml and using them. Is this the expected behavior?
Another observation is that the chart repo's values.yaml is loaded as parameters in ArgoCD, and the values are displayed in the UI of the Application.
From the documents I see there are parameters, which can be overridden, but the values.yaml itself can't be:
spec:
  source:
    helm:
      parameters:
        - name: app
          value: $ARGOCD_APP_NAME
Is there any option to explicitly tell ArgoCD to ignore the values.yaml from the Helm chart in Artifactory?
I am new to ArgoCD.
It looks like this is indeed the behavior. Say the Helm chart in Artifactory has a key/value combination like this:
# value present in the helm chart values.yaml
app-details:
  name: "demoapp"
  version: "1.0"
  description: "simple demo app"
In the ArgoCD Application manifest:
# .....
  helm:
    releaseName: <release-name-hardcoded>
    # custom values to override the helm chart one
    values: |
      app-details:
        name: "demoapp"
        version: "1.0"
        # if I didn't specify the description here <--------------- *
# ....
* - ArgoCD sets the default value (from the chart's values.yaml) for the description key in this case. This description value can be seen in the deployed manifest.
Also, the values.yaml data is displayed as parameters in the UI.
I had to use a different approach, where I made changes to the chart templates that use this description value.
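If the goal is to stop a chart default from being applied at all, one option worth checking is standard Helm behavior rather than anything ArgoCD-specific: overriding a key with null asks Helm to drop it from the merged values. A sketch of the custom values with that idea:
helm:
  values: |
    app-details:
      name: "demoapp"
      version: "1.0"
      description: null   # asks Helm to drop the chart's default instead of inheriting it
Whether this works for your case depends on how the templates consume the key, so verify it with helm template or an ArgoCD diff first.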

helm not creating the resources

I have tried to run Helm for the first time. I have deployment.yaml, service.yaml and ingress.yaml files along with values.yaml and Chart.yaml.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc
  namespace: xyz
  labels:
    app: abc
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: abc
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 8080
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: abc
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  namespace: xyz
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.service.sslCert }}
spec:
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 8080
    - name: http
      protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
  selector:
    app: abc
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "haproxy-ingress"
  namespace: xyz
  labels:
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  annotations:
    kubernetes.io/ingress.class: alb
From what I can see, I do not think I have missed putting app.kubernetes.io/managed-by, but I still keep getting an error:
rendered manifests contain a resource that already exists. Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
It renders the file locally correctly.
helm list --all --all-namespaces returns nothing.
Please help.
You already have some resources (e.g. the Service abc in the namespace xyz) that you're now trying to install via a Helm chart.
Delete those and install them via helm install:
$ kubectl delete service -n <namespace> <service-name>
$ kubectl delete deployment -n <namespace> <deployment-name>
$ kubectl delete ingress -n <namespace> <ingress-name>
Once you have these resources deployed via Helm, you will be able to perform helm upgrade to change their properties.
Remove the app.kubernetes.io/managed-by label from your YAML files; it will be added by Helm.
The error below is quite common:
label validation error: missing key "app.kubernetes.io/managed-by":
must be set to "Helm"; annotation validation error: missing key
"meta.helm.sh/release-name": must be set to ..
So I'll provide a slightly longer explanation and also some context on the topic.
What happened?
It seems that you tried to create resources that already exist and were created outside of Helm (probably with kubectl).
Why does Helm throw the error?
Helm doesn't allow a resource to be owned by more than one
deployment.
It is the responsibility of the chart creator to ensure that the chart
produce unique resources only.
How can you solve this?
Option 1 - Follow the error message and add the meta.helm.sh annotations:
As described in this PR: Adopt resources into release with correct instance and managed-by labels
Helm will no longer error when attempting to create a resource that
already exists in the target cluster if the existing resource has the
correct meta.helm.sh/release-name and
meta.helm.sh/release-namespace annotations, and matches the label
selector app.kubernetes.io/managed-by=Helm. This facilitates
zero-downtime migrations to Helm 3 for managing existing deployments,
and allows Helm to "adopt" existing resources that it previously
created.
(*) I think that the meta.helm.sh scope is a less common approach today.
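If you do go with Option 1, a minimal sketch of how the existing Service from the error message could be adopted (the release name abc and namespace default come from the error text above; adjust them to your actual install):
kubectl -n xyz label service abc app.kubernetes.io/managed-by=Helm
kubectl -n xyz annotate service abc meta.helm.sh/release-name=abc
kubectl -n xyz annotate service abc meta.helm.sh/release-namespace=default
Per the PR referenced above, Helm should then be willing to adopt the resource on the next install or upgrade instead of erroring out.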
Option 2 - Add the app.kubernetes.io/instance label:
As can be seen in different Helm chart providers (Bitnami, NGINX ingress controller, External-DNS, for example), the convention is to combine the two labels:
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
(*) Notice: there are some CD tools, like ArgoCD, that automatically set the app.kubernetes.io/instance label and use it to determine which resources form the app.
Option 3 - Delete the old resources.
This might be relevant in your specific case, where the old resources may no longer be needed.
For those who need some context
What are those labels?
Shared labels and annotations share a common prefix: app.kubernetes.io. Labels without a prefix are private to users. The shared prefix ensures that shared labels do not interfere with custom user labels.
In order to take full advantage of using these labels, they should be applied on every resource object.
The app.kubernetes.io/managed-by label is used to describe the tool being used to manage the operation of an application - for example: helm.
Read more on the Recommended Labels section.
Are they added by Helm?
No.
First of all, as mentioned before, those labels are not specific to Helm, and Helm itself never requires that a particular label be present.
On the other hand, the Helm docs recommend using the following Standard Labels. app.kubernetes.io/managed-by is one of them and should be set to {{ .Release.Service }} in order to find all resources managed by Helm.
So it is the role of the chart maintainer to add those labels.
What is the best way to add them?
Many Helm chart providers add them to the _helpers.tpl file and let all resources include it:
labels: {{ include "my-chart.labels" . | nindent 4 }}
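A minimal sketch of such a helper, roughly following the layout that helm create scaffolds (the my-chart prefix is just a placeholder):
{{/* templates/_helpers.tpl */}}
{{- define "my-chart.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
{{- end }}
Every template that includes it then carries a consistent set of ownership labels.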
The trick here is to chase the error message.
For example, in the case below the error message points at something wrong with the Service in namespace xyz:
Unable to
continue with install: Service "abc" in namespace "xyz" exists and
cannot be imported into the current release: invalid ownership
metadata; label validation error: missing key
"app.kubernetes.io/managed-by": must be set to "Helm"; annotation
validation error: missing key "meta.helm.sh/release-name": must be set
to "abc"; annotation validation error: missing key
"meta.helm.sh/release-namespace": must be set to "default"
Simply delete that Service from the mentioned namespace:
kubectl -n xyz delete svc abc
Then try the installation/deployment again. It may happen that a similar issue appears, but for a different resource, as in the example below:
Release "nok-sec-sip-tls-crd" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Role "nok-sec-sip-tls-crd-role" in namespace "debu" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "nok-sec-sip-tls-crd": current value is "nok-sec-sip"
Again, use kubectl to delete the resource mentioned in the error message. For example, in the above case the offending Role should be deleted with:
kubectl delete role nok-sec-sip-tls-crd-role -n debu
I was getting this error because I was trying to upgrade the Helm chart with the wrong release name, so it conflicted with the existing resources in the same namespace.
I was running this command with the wrong release name:
helm upgrade --install --namespace <namespace> wrong-releasename <chart-folder>
and got similar errors:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap \"cmname\" in namespace \"namespace\" exists and cannot be imported into the current release
invalid ownership metadata; label validation error: missing key \"app.kubernetes.io/managed-by\": must be set to \"Helm\"; annotation validation error: missing key \"meta.helm.sh/release-name\": must be set to \"wrong-releasename\"; annotation validation error: missing key \"meta.helm.sh/release-namespace\": must be set to \"namespace\"
I checked the existing Helm releases in the same namespace and used the listed release name to upgrade my Helm chart:
helm ls -n <namespace>
helm upgrade --install --namespace <namespace> releasename <chart-folder>
Here's a faster and more thorough way to get rid of Argo CD so it can be reinstalled:
helm list -A # see argocd in namespace argocd
helm uninstall argocd -n argocd
kubectl delete namespace argocd
The last line gets rid of all secrets and other resources not cleaned up by uninstalling the helm chart, and was needed in my environment, otherwise, I got the same sorts of errors about duplicate resources you were seeing.
We use GitOps via Flux, and I was getting the same rendered manifests contain a resource that already exists error. For me the problem was I accidentally defined a resource with the same name in two different files, so it was trying to create it twice. I removed the duplicate resource definition from one of the files to fix it up.

helm deploy with no objects

I'm building a very simple chart with Helm.
It consists of deploying a chart with just one object ("/templates/pod.yaml"), which has to be deployed only if a parameter in the values.yaml file is true.
To provide an example of my case, this is what I have:
/templates/pod.yaml
{{- if eq .Values.shoudBeDeployed true }}
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
{{- end}}
Values.yaml
shoudBeDeployed: true
So when I set shoudBeDeployed to true, Helm installs it correctly.
My problem is that when shoudBeDeployed is false, Helm doesn't deploy anything (as I expected), but it shows the following message:
Error: release CHART_NAME failed: no objects visited
And if I execute helm ls I see that CHART_NAME is deployed with STATUS FAILED.
My question is whether there is a way to avoid ending up with a failed Helm release, so that it does not show up in helm ls.
I know that I could move the logic of the shoudBeDeployed variable outside the chart and then deploy the chart or not depending on its value, but I would like to know if there is a solution using only Helm.
@pcampana I think there is no way to stop a Helm deployment if there is nothing to deploy. But here is a trick that you can use to delete a Helm release if it is FAILED:
helm install --name temp demo --atomic
where demo is the Helm chart directory and temp is the release name.
The release name is mandatory for this to work.
One scenario is when you see the error
Error: release temp failed: no objects visited
you can use the above command to deploy the Helm chart.
I think this might be useful for you.
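For reference, the equivalent in Helm 3 syntax drops the --name flag; --atomic deletes the release automatically when the install fails, so a failed "no objects" release should not linger in helm ls (a sketch, using the chart and value from the question):
helm install temp ./demo --atomic --set shoudBeDeployed=false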

How to use ConfigMap configuration with Helm NginX Ingress controller - Kubernetes

I've found a documentation about how to configure your NginX ingress controller using ConfigMap: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
Unfortunately I have no idea, and couldn't find anywhere, how to load that ConfigMap from my Ingress controller.
My ingress controller:
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
My config map:
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-configmap
data:
  proxy-read-timeout: "86400s"
  client-max-body-size: "2g"
  use-http2: "false"
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - my.endpoint.net
      secretName: ingress-tls
  rules:
    - host: my.endpoint.net
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: 443
          - path: /api
            backend:
              serviceName: api
              servicePort: 443
How do I make my Ingress to load the configuration from the ConfigMap?
I've managed to display what YAML gets executed by Helm using the --dry-run --debug options at the end of the helm install command. Then I noticed that the controller is executed with --configmap={namespace-where-the-nginx-ingress-is-deployed}/{name-of-the-helm-chart}-nginx-ingress-controller.
In order to load your ConfigMap you need to override it with your own (note the namespace):
kind: ConfigMap
apiVersion: v1
metadata:
  name: {name-of-the-helm-chart}-nginx-ingress-controller
  namespace: {namespace-where-the-nginx-ingress-is-deployed}
data:
  proxy-read-timeout: "86400"
  proxy-body-size: "2g"
  use-http2: "false"
The list of config properties can be found here.
One can also pass ConfigMap properties at installation time:
helm install stable/nginx-ingress --name nginx-ingress --set controller.config.use-forwarded-headers='"true"'
NOTE: for non-string values I had to use single quotes around double quotes to get it working.
If you used helm install to install ingress-nginx and no explicit value for which ConfigMap the nginx controller should look at was passed, the default seems to be {namespace}/{release-name}-nginx-ingress-controller. This is generated by https://github.com/helm/charts/blob/1e074fc79d0f2ee085ea75bf9bacca9115633fa9/stable/nginx-ingress/templates/controller-deployment.yaml#L67 (look for something similar if that turns out to be a dead link).
To verify for yourself, try to find your command that you installed the ingress-nginx chart with, and add --dry-run --debug to the command. This will show you the yaml files generated by Tiller to be applied to the cluster. The line # Source: nginx-ingress/templates/controller-deployment.yaml begins the controller deployment which has an arg of --configmap=. The value of this arg is what needs to be the name of the ConfigMap for the controller to sense, and use to update its own .conf file. This could be passed explicitly, but if it is not, it will have a default value.
If a ConfigMap is created with the RIGHT name, the controller's logs will show that it picked up the configuration change and reloaded itself.
This can be verified with kubectl logs <pod-name-of-controller> -n <namespace-arg-if-not-in-default-namespace>. My log messages contained the text Configuration changes detected, backend reload required. These log messages will not be present if the ConfigMap name was wrong.
I believe the official documentation for this is unnecessarily lacking, but maybe I'm incorrect? I will try to submit a PR with these details. Someone who knows more should help flesh them out so people don't need to stumble on this unnecessarily.
Cheers, thanks for your post.
If you want to give your own configuration while deploying nginx-ingress-controller, you can have a wrapper Helm chart over the original nginx-ingress Helm chart and provide your own values.yaml which can have custom configuration.
Using Helm 3 here.
Create a chart:
$ helm create custom-nginx
$ tree custom-nginx
So my chart structure looks like this:
custom-nginx/
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
There are a few extra things here. Specifically, I don't need the complete templates/ directory and its contents, so I'll just remove those:
$ rm custom-nginx/templates/*
$ rmdir custom-nginx/templates
Now, the chart structure should look like this:
custom-nginx/
├── Chart.yaml
├── charts
└── values.yaml
Since we have to include the original nginx-ingress chart as a dependency, my Chart.yaml looks like this:
$ cat custom-nginx/Chart.yaml
apiVersion: v2
name: custom-nginx
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.39.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.32.0
dependencies:
  - name: nginx-ingress
    version: 1.39.1
    repository: https://kubernetes-charts.storage.googleapis.com/
Here, appVersion is the nginx-controller Docker image version and version matches the nginx-ingress chart version that I am using.
The only thing left is to provide your custom configuration. Here is a stripped-down version of my custom configuration:
$ cat custom-nginx/values.yaml
# Default values for custom-nginx.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
nginx-ingress:
  controller:
    ingressClass: internal-nginx
    replicaCount: 1
    service:
      externalTrafficPolicy: Local
    publishService:
      enabled: true
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: "80"
      targetMemoryUtilizationPercentage: "80"
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 1
        memory: 2Gi
    metrics:
      enabled: true
    config:
      compute-full-forwarded-for: "true"
We can check the keys that are available to use as configuration (config section in values.yaml) in https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
And the rest of the configuration can be found here: https://github.com/helm/charts/tree/master/stable/nginx-ingress#configuration
Once configurations are set, just download the dependency of your chart:
$ helm dependency update <path/to/chart>
It's a good practice to do basic checks on your chart before deploying it:
$ helm lint <path/to/chart>
$ helm install --debug --dry-run --namespace <namespace> <release-name> <path/to/chart>
Then deploy your chart (which will deploy your nginx-ingress-controller with your own custom configurations).
Also, since you have a chart now, you can upgrade and roll back your chart.
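For example, with standard Helm 3 commands (release name, namespace, path and revision are placeholders):
$ helm install <release-name> <path/to/chart> --namespace <namespace>
$ helm upgrade <release-name> <path/to/chart> --namespace <namespace>
$ helm rollback <release-name> <revision> --namespace <namespace>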
When installing the chart through terraform, the configuration values can be set as shown below:
resource "helm_release" "ingress_nginx" {
  name       = "nginx"
  repository = "https://kubernetes.github.io/ingress-nginx/"
  chart      = "ingress-nginx"

  set {
    name  = "version"
    value = "v4.0.2"
  }

  set {
    name  = "controller.config.proxy-read-timeout"
    value = "86400s"
  }

  set {
    name  = "controller.config.client-max-body-size"
    value = "2g"
  }

  set {
    name  = "controller.config.use-http2"
    value = "false"
  }
}
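If you prefer to keep the nginx settings in a single YAML document instead of many set blocks, the helm_release resource also accepts a values list (a sketch; the file name is just a placeholder):
resource "helm_release" "ingress_nginx" {
  name       = "nginx"
  repository = "https://kubernetes.github.io/ingress-nginx/"
  chart      = "ingress-nginx"

  # raw YAML passed to the chart, e.g. containing controller.config keys
  values = [file("${path.module}/ingress-nginx-values.yaml")]
}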
Just to confirm @NeverEndingQueue's answer above: the name of the ConfigMap is present in the nginx-controller pod spec itself, so if you inspect the YAML of the nginx-controller pod (kubectl get po release-name-nginx-ingress-controller-random-sequence -o yaml), under spec.containers you will find something like:
- args:
  - /nginx-ingress-controller
  - --default-backend-service=default/release-name-nginx-ingress-default-backend
  - --election-id=ingress-controller-leader
  - --ingress-class=nginx
  - --configmap=default/release-name-nginx-ingress-controller
For example here, a config map named release-name-nginx-ingress-controller in the namespace default needs to be created.
Once done, you can verify if the changes have taken place by checking the logs. Normally, you will see something like:
I1116 10:35:45.174127 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"default", Name:"release-name-nginx-ingress-controller", UID:"76819abf-4df0-41e3-a3fe-25445e754f32", APIVersion:"v1", ResourceVersion:"62559702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap default/release-name-nginx-ingress-controller
I1116 10:35:45.184627 6 controller.go:141] Configuration changes detected, backend reload required.
I1116 10:35:45.396920 6 controller.go:157] Backend successfully reloaded.
When you apply a ConfigMap with the needed key/value data, the ingress controller picks up this information and inserts it into the nested nginx-ingress-controller pod's original configuration file /etc/nginx/nginx.conf. It is therefore easy afterwards to verify whether the ConfigMap's values have been successfully reflected, by checking the actual nginx.conf inside the corresponding pod.
You can also check the logs of the relevant nginx-ingress-controller pod to see whether the ConfigMap data has already been reloaded into the backend nginx.conf and, if not, to investigate the reason.
Using enable-underscores-in-headers=true worked for me, not enable-underscores-in-headers='"true"':
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --set controller.config.enable-underscores-in-headers=true
I managed to update large-client-header-buffers in nginx via the ConfigMap. Here are the steps I followed.
1. Find the ConfigMap name in the nginx ingress controller pod description:
kubectl -n utility describe pods/test-nginx-ingress-controller-584dd58494-d8fqr |grep configmap
--configmap=test-namespace/test-nginx-ingress-controller
Note: in my case, the namespace is "test-namespace" and the ConfigMap name is "test-nginx-ingress-controller".
2. Create a ConfigMap YAML:
cat << EOF > test-nginx-ingress-controller-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: test-nginx-ingress-controller
  namespace: test-namespace
data:
  large-client-header-buffers: "4 16k"
EOF
Note: please replace the namespace and ConfigMap name with the values found in step 1.
3. Deploy the ConfigMap YAML:
kubectl apply -f test-nginx-ingress-controller-configmap.yaml
Then, after a few minutes, you will see the change reflected in the nginx controller pod, e.g.:
kubectl -n test-namespace exec -it test-nginx-ingress-controller-584dd58494-d8fqr -- cat /etc/nginx/nginx.conf|grep large
large_client_header_buffers 4 16k;
Based on NeverEndingQueue's answer, I want to provide an update for Kubernetes v1.23 / Helm 3.
This is my installation command with the --dry-run --debug part added:
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --dry-run --debug
This is the part we need from the generated output of the command above:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          ...
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --...
            - --configmap=${POD_NAMESPACE}/ingress-nginx-controller
            - --...
....
We need this part: --configmap=${POD_NAMESPACE}/ingress-nginx-controller.
As you can see, the name of the ConfigMap must be ingress-nginx-controller, and the namespace must be the one you used during chart installation (i.e. {POD_NAMESPACE}; in my example above this is --namespace ingress-nginx).
# nginx-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  map-hash-bucket-size: "128"
Then run kubectl apply -f nginx-config.yaml to apply the ConfigMap, and nginx's pod(s) will be auto-reloaded with the updated config.
To check that the nginx config has been updated, find the name of an nginx pod (any one will do if you have several nodes): kubectl get pods -n ingress-nginx (or kubectl get pods -A)
and then check the config: kubectl exec -it ingress-nginx-controller-{generatedByKubernetesId} -n ingress-nginx cat /etc/nginx/nginx.conf
UPDATE:
The correct name (i.e. name: ingress-nginx-controller) is shown in the official docs. Conclusion: no need to reinvent the wheel.
What you have is an Ingress YAML, not an Ingress controller deployment YAML. The Ingress controller is the pod that actually does the work and is usually an nginx container itself. An example of such a configuration can be found in the documentation you shared.
UPDATE
Using the example provided, you can also load config into nginx from a ConfigMap in the following way:
volumeMounts:
  - name: nginx-config
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf
volumes:
  - name: nginx-config
    configMap:
      name: nginx-config
Here nginx-config is a ConfigMap that contains your nginx configuration.
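A minimal sketch of what that ConfigMap could look like (the actual nginx.conf content is up to you):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    # full nginx configuration goes here
    ...
The subPath: nginx.conf mount above then places exactly this key at /etc/nginx/nginx.conf inside the container.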
I read the above answers but could not make it work.
What worked for me was the following:
release_name=tcp-udp-ic
# add the helm repo from NginX and update the chart
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
echo "- Installing -${release_name}- into cluster ..."
#delete the config map if already exists
kubectl delete cm tcp-udp-ic-cm
helm del --purge ${release_name}
helm upgrade --install ${release_name} \
--set controller.image.tag=1.6.0 \
--set controller.config.name=tcp-udp-ic-cm \
nginx-stable/nginx-ingress --version 0.4.0 #--dry-run --debug
# update the /etc/nginx/nginx.conf file with my attributes, via the config map
kubectl apply -f tcp-udp-ic-cm.yaml
and the tcp-udp-ic-cm.yaml is:
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-udp-ic-cm
  namespace: default
data:
  worker-connections: "10000"
Essentially I need to deploy the release with Helm and set the name of the ConfigMap that it is going to use. Helm creates the ConfigMap, but empty. Then I apply the ConfigMap file in order to update the ConfigMap resource with my values. This sequence is the only one I could make work.
An easier way of doing this is just modifying the values that are deployed through Helm. The values that need to end up in the ConfigMap now live under controller.config.entries. Show the latest values with helm show values nginx-stable/nginx-ingress and look for the format in the version you are running.
I had tons of issues with this since all references online said controller.config, until I checked with the command above.
After you've entered the values, upgrade with:
helm upgrade -f <PATH_TO_FILE>.yaml <NAME> nginx-stable/nginx-ingress
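For illustration, a values file for that layout might look roughly like this (treat the exact structure as an assumption and confirm it against helm show values for your chart version):
controller:
  config:
    entries:
      worker-connections: "10000"
Note that this applies to the nginx-stable/nginx-ingress chart from NGINX Inc. (helm.nginx.com/stable), which is a different chart from the community ingress-nginx chart used in some of the other answers here.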
The nginx ingress controller may cause issues with forwarding. While we were able to get it working with nginx via X-Forwarded-Proto etc., it was a bit complicated and convoluted.
Moving to haproxy instead resolved this problem. Also, make sure you are interfacing with the ingress controller over HTTPS, or that may cause issues with Keycloak.
Keycloak v18 with --proxy edge
annotations:
  kubernetes.io/ingress.class: haproxy
...

Helm how to define .Release.Name value

I have created a basic Helm template using the helm create command. While checking the template for the Ingress, it renders the string RELEASE-NAME plus the app name, like RELEASE-NAME-microapp.
How can I change the .Release.Name value?
helm template --kube-version 1.11.1 microapp/
# Source: microapp/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: RELEASE-NAME-microapp
  labels:
    app: microapp
    chart: microapp-0.1.0
    release: RELEASE-NAME
    heritage: Tiller
  annotations:
    kubernetes.io/ingress.class: nginx
This depends on what version of Helm you have; helm version can tell you this.
In Helm version 2, it's the value of the helm install --name parameter or, absent this, a name Helm chooses itself. If you're checking what might be generated via helm template, that also takes a --name parameter.
In Helm version 3, it's the first parameter to the helm install command. Helm won't generate a name automatically unless you explicitly ask it to with helm install --generate-name. helm template also takes the same options.
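For example, with Helm 3 the release name is simply the first positional argument, so something like the following renders the chart with .Release.Name set to myrelease (myrelease being an arbitrary placeholder):
helm template myrelease microapp/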
Also, in helm 3, if you want to specify a name explicitly, you should use the --name-template flag. e.g. helm template --name-template=dummy in order to use the name dummy instead of RELEASE-NAME
As of helm 3.9 the flag is --release-name, making the command: helm template --release-name <release name>