Helm skips creation of istio virtual service - kubernetes

I am trying to create a helm chart for my service with the following structure:
.
├── my-app
│   ├── Chart.yaml
│   ├── templates
│   │   ├── deployment.yaml
│   │   ├── istio-virtualservice.yaml
│   │   └── service.yaml
│   └── values.yaml
After installing the Helm chart, the deployment and service are created successfully, but the virtual service is not.
$ helm install -name my-app ./my-app -n my-namespace
$ kubectl get pods -n my-namespace
NAME READY STATUS RESTARTS AGE
my-app-5578cbb95-xzqzk 2/2 Running 0 5m
$ kubectl get vs
NAME GATEWAYS HOSTS AGE
<Empty>
My Istio virtual service YAML file looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
spec:
  hosts:
  - {{$.Values.props.host | quote}}
  gateways:
  - my-api-gateway
  http:
  - match:
    - uri:
        prefix: /app/
    rewrite:
      uri: "/"
    route:
    - destination:
        port:
          number: 8083
        host: my-service.my-namespace.svc.cluster.local
Surprisingly, if I apply the above YAML after helm install has finished deploying the app, the virtual service does get created.
$ kubectl apply -f istio-vs.yaml
$ kubectl get vs
NAME GATEWAYS HOSTS AGE
my-virtual-service [my-api-gateway] [my-host.com] 60s
Please help me debug the issue and let me know if more debug information is needed.
$ helm version
version.BuildInfo{Version:"v3.0.1", GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", GoVersion:"go1.13.4"}
$ istioctl version
client version: 1.4.1
control plane version: 1.4.1
data plane version: 1.4.1 (2 proxies)
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:18:29Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

Use
kubectl get vs -n my-namespace
instead of
kubectl get vs
That's because you have deployed everything in the my-namespace namespace:
helm install -name my-app ./my-app -n my-namespace
and you're searching for the virtual service in the default namespace.
It works when you apply the YAML yourself because there is no namespace in the virtual service metadata, so it gets deployed into the default one.
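If you want the chart to be explicit about which namespace the VirtualService lands in, one option (a sketch, not part of the original chart) is to set metadata.namespace from Helm's built-in Release object so it always follows the -n flag passed to helm install:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
  namespace: {{ .Release.Namespace }}  # resolves to the namespace passed via -n
spec:
  # ... rest of the spec unchanged ...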
Additional info: I see you already have a gateway deployed. If it's not in the same namespace as the virtual service, you should reference it as in the example below.
Check the spec.gateways section
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo-Mongo
  namespace: bookinfo-namespace
spec:
  gateways:
  - some-config-namespace/my-gateway # can omit the namespace if the gateway is in the same namespace as the virtual service
I hope this answers your question. Let me know if you have any more questions.

Related

Helm - Release multi container application and partial release

We have the below chart structure in Helm, which can deploy 'a microservice' to our k8s cluster:
service-chart
|_templates
|_deployment.yaml
|_Ingress.yaml
|_service.yaml
|_Chart.yaml
|_valueBase.yaml
|_valueForService1.yaml
|_valueForService2.yaml
..
..
|_valueForService{n}.yaml
valueBase.yaml contains default values for a service (e.g. limits, replicas, etc.):
global:
  namespace: teamname
  environment: staging
limits:
  memory: "500Mi"
  cpu: "300m"
deployments:
  replicas: 1
probes:
  path: "/"
and valueForService1.yaml contains the values overridden for a specific service:
app:
  name: "service1"
  image:
    name: "service1"
    version: "2021.11.xxx"
  port: 50001
deployments:
  replicas: 3
All of the above services follow exactly the same structure and create similar resources, i.e. a service, a pod and an ingress.
We deploy an individual service as:
helm upgrade --install -f valueBase.yaml -f valueForService1.yaml service1 .
But the problem is we have 50 of these services,
and we would want to install all of them together instead of running 50 commands back to back.
We would also like some release coordination between multiple services,
e.g. release service1 before releasing service2.
I know we can do this in one command with 50 -f parameters in it, but that's not the kind of solution I am here for.
How can we package them properly so we can:
release all services at once when we want to
release an individual service
release them as a group of services, e.g. group1 consisting of service1, service2 and service3
orchestrate the release, e.g. release service1, then service2, then service3
All suggestions welcomed. Feel free to ask for more details.
Note: We tried using sub-charts, but the problem doesn't appear to be solved by them.
We just have lots of services with a similar structure,
though I might have used sub-charts completely wrong.
What you're looking for is helmfile.
Store your base chart and service values in the same folder.
Create helmfile.yml.
releases:
  - name: service1
    labels:
      app: service1
      group: group1
    chart: charts/baseChart
    values:
      - valueBase.yaml
      - valueForService1.yaml
      - fullnameOverride: service1
  - name: service2
    labels:
      app: service2
      group: group1
    chart: charts/baseChart
    values:
      - valueBase.yaml
      - valueForService2.yaml
      - fullnameOverride: service2
    needs:
      - service1
Each release in the releases list is a helm release definition. You can control installation order with the needs keyword.
Now your folder looks something like this
.
├── charts/
│   └── baseChart/
├── helmfile.yml
├── valueBase.yml
├── valueForService1.yml
└── valueForService2.yml
and you're ready to run helmfile commands
helmfile -n <namespace> apply will release all services at once
helmfile -n <namespace> -l app=service1 apply will release an individual service with the app: service1 label
helmfile -n <namespace> -l group=group1 apply will release all services with the group: group1 label

Unknown field "setHostnameAsFQDN" despite using latest kubectl client

I have a deployment yaml file that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      setHostnameAsFQDN: true
      hostname: hello
      subdomain: world
      containers:
      - name: hello-kubernetes
        image: redis
However, I am getting this error:
$ kubectl apply -f dep.yaml
error: error validating "dep.yaml": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "setHostnameAsFQDN" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
My kubectl version:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
After specifying --validate=false, hostname and hostname -f still return different values.
I believe I misunderstood something. The docs say that setHostnameAsFQDN will be available from Kubernetes v1.20.
You showed your kubectl version. Your Kubernetes (server) version also needs to be v1.20, so make sure you are running Kubernetes v1.20.
Use kubectl version to see both the client and server versions, where the client version refers to kubectl and the server version refers to the Kubernetes cluster itself.
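For reference, the check itself is just the following; the Server Version line (i.e. the control plane) is the one that must report v1.20 or newer for setHostnameAsFQDN to pass API validation:
$ kubectl version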
As per the k8s v1.20 release notes: previously introduced in 1.19 behind a feature gate, SetHostnameAsFQDN is now enabled by default. More details on this behavior are available in the documentation for DNS for Services and Pods.

Exclude Resource in kustomization.yaml

I have a kustomize base that I'd like to re-use without editing it. Unfortunately, it creates a namespace I don't want to create. I'd like to simply remove that resource from consideration when compiling the manifests and add a resource for mine since I can't patch a namespace to change the name.
Can this be done? How?
You can omit a specific resource by using the delete directive of a Strategic Merge Patch, like this.
Folder structure
$ tree .
.
├── base
│   ├── kustomization.yaml
│   └── namespace.yaml
└── overlays
    ├── dev
    │   └── kustomization.yaml
    └── prod
        ├── delete-ns-b.yaml
        └── kustomization.yaml
File content
$ cat base/kustomization.yaml
resources:
- namespace.yaml

$ cat base/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b

$ cat overlays/dev/kustomization.yaml
bases:
- ../../base

$ cat overlays/prod/delete-ns-b.yaml
$patch: delete
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b

$ cat overlays/prod/kustomization.yaml
bases:
- ../../base
patchesStrategicMerge:
- delete-ns-b.yaml
Behavior of kustomize
$ kustomize build overlays/dev
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-b

$ kustomize build overlays/prod
apiVersion: v1
kind: Namespace
metadata:
  name: ns-a
In this case, we have two namespaces in the base folder. In dev, kustomize produces both namespaces because there is no patch. But in prod, kustomize produces only one namespace, because the delete patch removes namespace ns-b.
I found that my understanding of not being able to change a namespace name was incorrect. Using the patch capability, you actually can change the name of a resource including namespaces.
This is what I ended up using:
patches:
  - target:
      kind: Namespace
      name: application
    patch: |-
      - op: replace
        path: /metadata/name
        value: my-application
I encountered this problem and eventually took a different approach to solving it. It's worth thinking back through your requirements and asking yourself why you would want kustomize to omit a resource. In my case (and I would imagine this is the most common use case) I wanted kustomize to omit a resource because I didn't want to apply it to the target Kubernetes cluster, but kustomize doesn't provide an easy way to do this. Would it not be better for the filtering to take place when applying the resources to the cluster rather than when generating them? The solution I eventually settled on was to filter the resources by label when applying them to the cluster. You can add an exclusion label in an overlay to prevent a resource from being applied.
e.g.
$ kustomize build . | kubectl apply -l apply-resource!=no -f -
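For completeness, here is a minimal sketch of how such an exclusion label could be attached in an overlay. The label name apply-resource is only an assumption matching the selector above, and the patch targets the ns-b namespace from the earlier example:
# overlays/prod/kustomization.yaml (sketch)
bases:
- ../../base
patches:
  - target:
      kind: Namespace
      name: ns-b
    patch: |-
      - op: add
        path: /metadata/labels
        value:
          apply-resource: "no"
With that label in place, kustomize build . | kubectl apply -l apply-resource!=no -f - applies everything except ns-b.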

Google cloud Kubernetes deployment error: Field is immutable

After fixing the problem from this topic, Can't use Google Cloud Kubernetes substitutions (the YAML files are all there, so I won't copy-paste them once again), I ran into a new problem. I'm making a new topic because the previous one already has the correct answer.
Step #2: Running: kubectl apply -f deployment.yaml
Step #2: Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Step #2: The Deployment "myproject" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"myproject", "run":"myproject"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
I've checked similar issues but haven't been able to find anything related.
Also, is it possible that this error is related to the move from App Engine -> Docker -> Kubernetes? I created a valid configuration at each step. Maybe some things were created then and are immutable now? What should I do in this case?
One more note that may matter: it says "kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply" (you can see above), but executing
kubectl create deployment myproject --image=gcr.io/myproject/myproject
gives me this
Error from server (AlreadyExists): deployments.apps "myproject" already exists
which is actually expected but, at the same time, contradicts the warning above (at least from my perspective).
Any idea?
Output of kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Current YAML file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [
    '-c',
    'docker pull gcr.io/$PROJECT_ID/myproject:latest || exit 0'
  ]
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t',
    'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA',
    '-t',
    'gcr.io/$PROJECT_ID/myproject:latest',
    '.'
  ]
- name: 'gcr.io/cloud-builders/kubectl'
  args: [ 'apply', '-f', 'deployment.yaml' ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<region>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
- name: 'gcr.io/cloud-builders/kubectl'
  args: [
    'set',
    'image',
    'deployment',
    'myproject',
    'myproject=gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
  ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<region>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
  - 'DB_PORT=5432'
  - 'DB_SCHEMA=public'
  - 'TYPEORM_CONNECTION=postgres'
  - 'FE=myproject'
  - 'V=1'
  - 'CLEAR_DB=true'
  - 'BUCKET_NAME=myproject'
  - 'BUCKET_TYPE=google'
  - 'KMS_KEY_NAME=storagekey'
timeout: 1600s
images:
- 'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
- 'gcr.io/$PROJECT_ID/myproject:latest'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myproject
  template:
    metadata:
      labels:
        app: myproject
    spec:
      containers:
      - name: myproject
        image: gcr.io/myproject/github.com/weekendman/{{repo name here}}:latest
        ports:
        - containerPort: 80
From apps/v1 on, a Deployment’s label selector is immutable after it gets created.
Excerpt from the Kubernetes documentation:
Note: In API version apps/v1, a Deployment’s label selector is
immutable after it gets created.
So, you can delete this deployment first, then apply it.
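In practice that boils down to something like the following (note that this recreates the Deployment, so its pods are gone until the new rollout completes):
$ kubectl delete deployment myproject
$ kubectl apply -f deployment.yaml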
The MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable error appears because the selector differs from the one in your previous deployment.
Try looking at the existing deployment with kubectl get deployment -o yaml. I suspect the existing YAML has a different matchLabels stanza.
Specifically, your file has:
matchLabels:
  app: myproject
My guess is the output of kubectl get deployment -o yaml will have something different, like:
matchLabels:
  app: old-project-name
or
matchLabels:
  app: myproject
  version: alpha
The new deployment cannot change the matchLabels stanza because, well, because it is immutable. That stanza in the new deployment must match the old. If you want to change it, you need to delete the old deployment with kubectl delete deployment myproject.
Note: if you do that in production your app will be unavailable for a while. (A much longer discussion about how to do this in production is not useful here.)

How to use ConfigMap configuration with Helm NginX Ingress controller - Kubernetes

I've found documentation about how to configure the NGINX ingress controller using a ConfigMap: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
Unfortunately I have no idea, and couldn't find anywhere, how to make my Ingress controller load that ConfigMap.
My ingress controller:
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
My config map:
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-configmap
data:
  proxy-read-timeout: "86400s"
  client-max-body-size: "2g"
  use-http2: "false"
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - my.endpoint.net
    secretName: ingress-tls
  rules:
  - host: my.endpoint.net
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 443
      - path: /api
        backend:
          serviceName: api
          servicePort: 443
How do I make my Ingress controller load the configuration from the ConfigMap?
I've managed to display what YAML gets executed by Helm using the --dry-run --debug options at the end of the helm install command. Then I noticed that the controller is executed with --configmap={namespace-where-the-nginx-ingress-is-deployed}/{name-of-the-helm-chart}-nginx-ingress-controller.
In order to load your ConfigMap you need to override it with your own (note the namespace):
kind: ConfigMap
apiVersion: v1
metadata:
  name: {name-of-the-helm-chart}-nginx-ingress-controller
  namespace: {namespace-where-the-nginx-ingress-is-deployed}
data:
  proxy-read-timeout: "86400"
  proxy-body-size: "2g"
  use-http2: "false"
The list of config properties can be found here.
One can pass config map properties at installation time too:
helm install stable/nginx-ingress --name nginx-ingress --set controller.config.use-forwarded-headers='"true"'
NOTE: for non-string values, I had to use single quotes around double quotes to get it working.
If you used helm install to install ingress-nginx and no explicit value was passed for which ConfigMap the nginx controller should look at, the default value appears to be {namespace}/{release-name}-nginx-ingress-controller. This is generated by https://github.com/helm/charts/blob/1e074fc79d0f2ee085ea75bf9bacca9115633fa9/stable/nginx-ingress/templates/controller-deployment.yaml#L67 (look for a similar template if that link goes dead).
To verify for yourself, find the command you installed the ingress-nginx chart with and add --dry-run --debug to it. This will show you the YAML files generated by Tiller to be applied to the cluster. The line # Source: nginx-ingress/templates/controller-deployment.yaml begins the controller deployment, which has a --configmap= arg. The value of this arg is the name the ConfigMap must have for the controller to notice it and use it to update its own .conf file. This can be passed explicitly, but if it is not, it falls back to a default value.
If a ConfigMap is created with the RIGHT name, the controller's logs will show that it picked up the configuration change and reloaded itself.
This can be verified with kubectl logs <pod-name-of-controller> -n <namespace-arg-if-not-in-default-namespace>. My log messages contained the text Configuration changes detected, backend reload required. These log messages will not be present if the ConfigMap name was wrong.
I believe the official documentation for this is unnecessarily lacking, but maybe I'm incorrect? I will try to submit a PR with these details. Someone who knows more should help flesh them out so people don't need to stumble on this unnecessarily.
Cheers, thanks for your post.
If you want to give your own configuration while deploying nginx-ingress-controller, you can have a wrapper Helm chart over the original nginx-ingress Helm chart and provide your own values.yaml which can have custom configuration.
Using Helm 3 here.
Create a chart:
$ helm create custom-nginx
$ tree custom-nginx
So my chart structure looks like this:
custom-nginx/
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
There are a few extra things here. Specifically, I don't need the complete templates/ directory and its contents, so I'll just remove those:
$ rm custom-nginx/templates/*
$ rmdir custom-nginx/templates
Now, the chart structure should look like this:
custom-nginx/
├── Chart.yaml
├── charts
└── values.yaml
Since we have to include the original nginx-ingress chart as a dependency, my Chart.yaml looks like this:
$ cat custom-nginx/Chart.yaml
apiVersion: v2
name: custom-nginx
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.39.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.32.0
dependencies:
  - name: nginx-ingress
    version: 1.39.1
    repository: https://kubernetes-charts.storage.googleapis.com/
Here, appVersion is the nginx-controller Docker image version, and version matches the nginx-ingress chart version that I am using.
The only thing left is to provide your custom configuration. Here is a stripped-down version of my custom configuration:
$ cat custom-nginx/values.yaml
# Default values for custom-nginx.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
nginx-ingress:
  controller:
    ingressClass: internal-nginx
    replicaCount: 1
    service:
      externalTrafficPolicy: Local
    publishService:
      enabled: true
    autoscaling:
      enabled: true
      minReplicas: 1
      maxReplicas: 3
      targetCPUUtilizationPercentage: "80"
      targetMemoryUtilizationPercentage: "80"
    resources:
      requests:
        cpu: 1
        memory: 2Gi
      limits:
        cpu: 1
        memory: 2Gi
    metrics:
      enabled: true
    config:
      compute-full-forwarded-for: "true"
We can check the keys that are available to use as configuration (config section in values.yaml) in https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
And the rest of the configuration can be found here: https://github.com/helm/charts/tree/master/stable/nginx-ingress#configuration
Once configurations are set, just download the dependency of your chart:
$ helm dependency update <path/to/chart>
It's a good practice to do basic checks on your chart before deploying it:
$ helm lint <path/to/chart>
$ helm install --debug --dry-run --namespace <namespace> <release-name> <path/to/chart>
Then deploy your chart (which will deploy your nginx-ingress-controller with your own custom configurations).
Also, since you have a chart now, you can upgrade and roll back your chart.
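For example (a sketch, assuming the chart path from above and a release named custom-nginx):
$ helm upgrade --namespace <namespace> custom-nginx ./custom-nginx
$ helm history --namespace <namespace> custom-nginx
$ helm rollback --namespace <namespace> custom-nginx <revision>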
When installing the chart through terraform, the configuration values can be set as shown below:
resource "helm_release" "ingress_nginx" {
name = "nginx"
repository = "https://kubernetes.github.io/ingress-nginx/"
chart = "ingress-nginx"
set {
name = "version"
value = "v4.0.2"
}
set {
name = "controller.config.proxy-read-timeout"
value = "86400s"
}
set {
name = "controller.config.client-max-body-size"
value = "2g"
}
set {
name = "controller.config.use-http2"
value = "false"
}
}
Just to confirm NeverEndingQueue's answer above, the name of the config map is present in the nginx-controller pod spec itself, so if you inspect the YAML of the nginx-controller pod with kubectl get po release-name-nginx-ingress-controller-random-sequence -o yaml, under spec.containers you will find something like:
- args:
  - /nginx-ingress-controller
  - --default-backend-service=default/release-name-nginx-ingress-default-backend
  - --election-id=ingress-controller-leader
  - --ingress-class=nginx
  - --configmap=default/release-name-nginx-ingress-controller
For example here, a config map named release-name-nginx-ingress-controller in the namespace default needs to be created.
Once done, you can verify if the changes have taken place by checking the logs. Normally, you will see something like:
I1116 10:35:45.174127 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"default", Name:"release-name-nginx-ingress-controller", UID:"76819abf-4df0-41e3-a3fe-25445e754f32", APIVersion:"v1", ResourceVersion:"62559702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap default/release-name-nginx-ingress-controller
I1116 10:35:45.184627 6 controller.go:141] Configuration changes detected, backend reload required.
I1116 10:35:45.396920 6 controller.go:157] Backend successfully reloaded.
When you apply a ConfigMap with the required key-value data, the Ingress controller picks up this information and inserts it into the nginx-ingress-controller Pod's original configuration file, /etc/nginx/nginx.conf. Therefore it's easy afterwards to verify whether the ConfigMap's values have been successfully reflected, by checking the actual nginx.conf inside the corresponding Pod.
You can also check the logs from the relevant nginx-ingress-controller Pod to see whether the ConfigMap data has already been reloaded into the backend nginx.conf, and if not, to investigate the reason.
Using enable-underscores-in-headers=true worked for me, not enable-underscores-in-headers='"true"':
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --set controller.config.enable-underscores-in-headers=true
I managed to update the "large-client-header-buffers" setting in nginx via the ConfigMap. Here are the steps I followed.
Find the ConfigMap name in the nginx ingress controller pod description:
kubectl -n utility describe pods/test-nginx-ingress-controller-584dd58494-d8fqr |grep configmap
--configmap=test-namespace/test-nginx-ingress-controller
Note: In my case, the namespace is "test-namespace" and the configmap name is "test-nginx-ingress-controller"
Create a configmap yaml
cat << EOF > test-nginx-ingress-controller-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: test-nginx-ingress-controller
  namespace: test-namespace
data:
  large-client-header-buffers: "4 16k"
EOF
Note: Please replace the namespace and ConfigMap name as per the findings in step 1.
Deploy the configmap yaml
kubectl apply -f test-nginx-ingress-controller-configmap.yaml
Then, after a few minutes, you will see the change reflected in the nginx controller pod,
e.g.
kubectl -n test-namespace exec -it test-nginx-ingress-controller-584dd58494-d8fqr -- cat /etc/nginx/nginx.conf|grep large
large_client_header_buffers 4 16k;
Based on NeverEndingQueue's answer, I want to provide an update for Kubernetes v1.23 / Helm 3.
This is my installation command + --dry-run --debug part:
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --dry-run --debug
This is the part we need from the generated output of the command above:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  ...
spec:
  ...
  template:
    ...
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          ...
          args:
            - /nginx-ingress-controller
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
            - --...
            - --configmap=${POD_NAMESPACE}/ingress-nginx-controller
            - --...
      ...
We need this part: --configmap=${POD_NAMESPACE}/ingress-nginx-controller.
As you can see, the name of the ConfigMap must be ingress-nginx-controller, and the namespace must be the one you used during chart installation (i.e. {POD_NAMESPACE}; in my example above this is --namespace ingress-nginx).
# nginx-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  map-hash-bucket-size: "128"
Then run kubectl apply -f nginx-config.yaml to apply the ConfigMap, and nginx's pod(s) will be auto-reloaded with the updated config.
To check that the nginx config has been updated, find the name of an nginx pod (any one will do if you have several nodes): kubectl get pods -n ingress-nginx (or kubectl get pods -A)
and then check config: kubectl exec -it ingress-nginx-controller-{generatedByKubernetesId} -n ingress-nginx cat /etc/nginx/nginx.conf
UPDATE:
The correct name (i.e. name: ingress-nginx-controller) is shown in the official docs. Conclusion: no need to reinvent the wheel.
What you have is an Ingress YAML, not an Ingress controller deployment YAML. The Ingress controller is the Pod that actually does the work and is usually an nginx container itself. An example of such a configuration can be found in the documentation you shared.
UPDATE
Using the example provided there, you can also load config into nginx from a ConfigMap in the following way:
volumeMounts:
  - name: nginx-config
    mountPath: /etc/nginx/nginx.conf
    subPath: nginx.conf
volumes:
  - name: nginx-config
    configMap:
      name: nginx-config
Here nginx-config contains your nginx configuration, mounted from the ConfigMap.
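A minimal sketch of what that nginx-config ConfigMap could look like (the nginx.conf contents here are purely illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    worker_processes  1;
    events {
      worker_connections  1024;
    }
    http {
      # ... your server/upstream blocks go here ...
    }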
I read the above answers but could not make it work.
What worked for me was the following:
release_name=tcp-udp-ic
# add the helm repo from NginX and update the chart
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
echo "- Installing -${release_name}- into cluster ..."
#delete the config map if already exists
kubectl delete cm tcp-udp-ic-cm
helm del --purge ${release_name}
helm upgrade --install ${release_name} \
--set controller.image.tag=1.6.0 \
--set controller.config.name=tcp-udp-ic-cm \
nginx-stable/nginx-ingress --version 0.4.0 #--dry-run --debug
# update the /etc/nginx/nginx.conf file with my attributes, via the config map
kubectl apply -f tcp-udp-ic-cm.yaml
and the tcp-udp-ic-cm.yaml is:
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-udp-ic-cm
  namespace: default
data:
  worker-connections: "10000"
Essentially I need to deploy the release with Helm and set the name of the config map that it is going to use. Helm creates the config map, but empty. Then I apply the config map file to update the config map resource with my values. This sequence is the only one I could make work.
An easier way of doing this is just modifying the values that are deployed through Helm. The values that need to be changed to end up in the ConfigMap are now under controller.config.entries. Show the latest values with helm show values nginx-stable/nginx-ingress and look for the format in the version you are running.
I had tons of issues with this since all references online said controller.config, until I checked with the command above.
After you've entered the values, upgrade with:
helm upgrade -f <PATH_TO_FILE>.yaml <NAME> nginx-stable/nginx-ingress
The nginx ingress controller may cause issues with forwarding. We were able to get it working with nginx via X-Forwarded-Proto etc., but it was a bit complicated and convoluted.
Moving to haproxy instead resolved this problem. Also, make sure you are interfacing with the ingress controller over HTTPS, or that may cause issues with Keycloak.
Keycloak v18 with --proxy edge
annotations:
  kubernetes.io/ingress.class: haproxy
  ...