I'm trying to set the value of ssl-session-cache in my ConfigMap for the ingress controller; the problem is that I can't find out how to write it correctly.
I need the following changes in the nginx config:
ssl-session-cache builtin:3000 shared:SSL:100m
ssl-session-timeout: 3000
When I add
ssl-session-timeout: "3000" to the ConfigMap, it works correctly - I can see it in the nginx config a few seconds later.
But how should I write ssl-session-cache?
ssl-session-cache: builtin:"3000" shared:SSL:"100m" is accepted, but nothing changes in nginx
ssl-session-cache: "builtin:3000 shared:SSL:100m" is accepted, but nothing changes in nginx
ssl-session-cache "builtin:3000 shared:SSL:100m" syntax error - can't change the ConfigMap
ssl-session-cache builtin:"3000 shared:SSL:100m" syntax error - can't change the ConfigMap
Does anyone have an idea how to set ssl-session-cache correctly in the ConfigMap?
Thank you!
TL;DR
After digging around and testing the same scenario in my lab, I've found out how to make it work.
As you can see here, the parameter ssl-session-cache expects a boolean value that only specifies whether the cache is enabled or not.
The change you need is handled by the parameter ssl-session-cache-size, which takes a string, so it seems reasonable to suppose that changing its value to builtin:3000 shared:SSL:100m would work. However, after reproducing this and diving into the nginx configuration, I've concluded that it will not work, because the option builtin:1000 is hardcoded.
In order to make it work as expected, I've found a solution using an nginx template stored in a ConfigMap mounted as a volume into the nginx-controller pod, plus another ConfigMap to change the parameter ssl-session-cache-size.
Workaround
Take a look at line 343 of the file nginx.tmpl (under /etc/nginx/template) in the nginx-ingress-controller pod:
bash-5.0$ grep -n 'builtin:' nginx.tmpl
343: ssl_session_cache builtin:1000 shared:SSL:{{ $cfg.SSLSessionCacheSize }};
As you can see, the option builtin:1000 is hardcoded and cannot be changed using custom data as in your approach.
However, there are some ways to make it work: you could change the template file directly in the pod, but those changes would be lost if the pod dies for some reason... or you could use a custom template mounted as a ConfigMap into the nginx-controller pod.
In this case, let's create a ConfigMap with the nginx.tmpl content, changing line 343 to the desired value.
Get the template file from the nginx-ingress-controller pod; this will create a file called nginx.tmpl locally:
NOTE: Make sure the namespace is correct.
$ NGINX_POD=$(kubectl get pods -n ingress-nginx -l=app.kubernetes.io/component=controller -ojsonpath='{.items[].metadata.name}')
$ kubectl exec $NGINX_POD -n ingress-nginx -- cat template/nginx.tmpl > nginx.tmpl
Change the value on line 343 from builtin:1000 to builtin:3000:
$ sed -i '343s/builtin:1000/builtin:3000/' nginx.tmpl
Checking if everything is OK:
$ grep builtin nginx.tmpl
ssl_session_cache builtin:3000 shared:SSL:{{ $cfg.SSLSessionCacheSize }};
Ok, at this point we have a nginx.tmpl file with the desired parameter changed.
Let's move on and create a configMap with the custom nginx.tmpl file:
$ kubectl create cm nginx.tmpl --from-file=nginx.tmpl
configmap/nginx.tmpl created
This will create a ConfigMap called nginx.tmpl in the ingress-nginx namespace; if your ingress controller's namespace is different, make the proper changes before applying.
After that, we need to edit the nginx-ingress deployment and add a new volume and a volumeMount to the containers spec. In my case, the nginx-ingress deployment is named ingress-nginx-controller and lives in the ingress-nginx namespace.
Edit the deployment file:
$ kubectl edit deployment -n ingress-nginx ingress-nginx-controller
And add the following configuration in the correct places:
...
volumeMounts:
- mountPath: /etc/nginx/template
name: nginx-template-volume
readOnly: true
...
volumes:
- name: nginx-template-volume
configMap:
name: nginx.tmpl
items:
- key: nginx.tmpl
path: nginx.tmpl
...
After saving the file, the nginx controller pod will be recreated with the ConfigMap mounted as a file inside the pod.
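You can watch the rollout finish before checking the result (deployment name and namespace as above):
$ kubectl rollout status deployment ingress-nginx-controller -n ingress-nginx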
Let's check if the changes were propagated:
$ kubectl exec -n ingress-nginx $NGINX_POD -- cat nginx.conf | grep -n ssl_session_cache
223: ssl_session_cache builtin:3000 shared:SSL:10m;
Great, the first part is done!
Now, for the shared:SSL:10m part, we can use the approach you already used: a ConfigMap with the specific parameters, as mentioned in this doc.
If you remember, in nginx.tmpl the shared:SSL size comes from the variable {{ $cfg.SSLSessionCacheSize }}; in the source code you can check that this variable is set by the option ssl-session-cache-size:
340 // Size of the SSL shared cache between all worker processes.
341 // http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
342 SSLSessionCacheSize string `json:"ssl-session-cache-size,omitempty"`
So, all we need to do is create a configMap with this parameter and the desired value:
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
data:
ssl-session-cache-size: "100m"
Note: adjust the namespace and ConfigMap name to match your environment.
Apply this ConfigMap and NGINX will reload the configuration and pick up the change in its configuration file.
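You can save it to a file and apply it, for example (the file name is illustrative):
$ kubectl apply -f ingress-nginx-controller-cm.yaml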
Checking the results:
$ NGINX_POD=$(kubectl get pods -n ingress-nginx -l=app.kubernetes.io/component=controller -ojsonpath='{.items[].metadata.name}')
$ kubectl exec -n ingress-nginx $NGINX_POD -- cat nginx.conf | grep -n ssl_session_cache
223: ssl_session_cache builtin:3000 shared:SSL:100m;
Conclusion
It works as expected. Unfortunately, I couldn't find a way to drive the builtin: value from a variable, so it stays hardcoded, but now it lives in a ConfigMap-backed template that you can easily change if needed.
References:
NGINX Ingress custom template
NGINX Ingress Source Code
Related
According to the ingress-nginx controller docs, the objects will use the ingress-nginx namespace, and you can change to another namespace with the --watch-namespace flag.
But when I run
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx0.28.0/deploy/static/provider/aws/service-l7.yaml --watch-namespace=default
It reports:
Error: unknown flag: --watch-namespace
See 'kubectl apply --help' for usage.
You are mixing up one flag with another. By default, the following command will deploy the controller into the ingress-nginx namespace, but you want it in some other namespace, such as default. To do so, you need to pass kubectl's -n or --namespace flag.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx0.28.0/deploy/static/provider/aws/service-l7.yaml --namespace default
NB:
--watch-namespace is a flag of nginx-ingress-controller. It is used while running the binary inside the container and needs to be set in deployment.spec.containers[].args[]. It restricts the controller's watch to a single Kubernetes namespace (by default it watches objects in all namespaces).
You need to set --watch-namespace in the args section of the nginx ingress controller deployment YAML:
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/$(NGINX_CONFIGMAP_NAME)
- --tcp-services-configmap=$(POD_NAMESPACE)/$(TCP_CONFIGMAP_NAME)
- --udp-services-configmap=$(POD_NAMESPACE)/$(UDP_CONFIGMAP_NAME)
- --publish-service=$(POD_NAMESPACE)/$(SERVICE_NAME)
- --annotations-prefix=nginx.ingress.kubernetes.io
- --watch-namespace=namespace
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/cloud-generic/deployment.yaml
I tried to automate the rolling update when ConfigMap changes are made, but I am confused about how I can verify whether the rolling update was successful. I found the command
kubectl rollout status deployment test-app -n test
But I guess this is used when we are performing a rollback rather than a rolling update. What's the best way to know whether the rolling update was successful?
I think it is fine;
kubectl rollout status deployments/test-app -n test can be used to verify the rollout as well.
As an additional step, you can run
kubectl rollout history deployments/test-app -n test as well.
If you need further clarity, run
kubectl get deployments -o wide and check the READY and IMAGE fields.
ConfigMap generation and rolling update
I tried to automate the rolling update when the configmap changes are made
It is good practice to create new resources instead of mutating them in place. kubectl kustomize supports this workflow:
The recommended way to change a deployment's configuration is to
create a new configMap with a new name,
patch the deployment, modifying the name value of the appropriate configMapKeyRef field.
You can deploy using Kustomize to automatically create a new ConfigMap every time you want to change its content, by using configMapGenerator. The old ones can be garbage collected when no longer used.
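A minimal kustomization.yaml sketch (resource and file names are illustrative):
# kustomization.yaml
resources:
- deployment.yaml
configMapGenerator:
- name: example-configmap-2
  files:
  - example.properties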
With Kustomize's configMapGenerator you get a generated name.
Example
kind: ConfigMap
metadata:
name: example-configmap-2-g2hdhfc6tk
and this name gets reflected into your Deployment, which then triggers a new rolling update, but with a new ConfigMap, leaving the old one unchanged.
Deploy both Deployment and ConfigMap using
kubectl apply -k <kustomization_directory>
When handling changes this way, you are following the practice called Immutable Infrastructure.
Verify deployment
To verify a successful deployment, you are right. You should use:
kubectl rollout status deployment test-app -n test
and when you leave the old ConfigMap unchanged and create a new ConfigMap for the new ReplicaSet, it is clear which ConfigMap belongs to which ReplicaSet.
Rollback is also easier to understand, since the old and new ReplicaSets each use their own ConfigMap (on changes of content).
Your command is fine to check if an update went through.
Now, a ConfigMap change eventually gets applied to the Pod. There is no need to do a rolling update for that. Depending on what you have passed in the ConfigMap, you probably could have restarted the service and that's it.
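If a restart is all you need after the ConfigMap change, a rolling restart does it without touching any manifests (deployment name and namespace are placeholders):
kubectl rollout restart deployment test-app -n test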
What's the best way to know if the rolling update is successful or not?
To check whether your rolling update was executed correctly, your command works fine; you could also check whether your replicas are running.
I tried to automate the rolling update when the configmap changes are made.
You could use Reloader to perform your rolling updates automatically when a ConfigMap or Secret changes.
Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.
Let's explore how Reloader works in a practical way, using an nginx deployment as an example.
First install Reloader in your cluster:
kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
You will see a new pod named reloader-reloader-...; it is responsible for watching your deployments and performing the rolling updates when necessary.
Create a ConfigMap with your values; in my case I'll create one called my-config and set a key called myvar.value to the value hello:
kubectl create configmap my-config --from-literal=myvar.value=hello
Now, let's create a simple deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-example
  labels:
    app: nginx
  annotations:
    configmap.reloader.stakater.com/reload: my-config
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        env:
        - name: MYVAR
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: myvar.value
In this example, the nginx image is used, getting the value from my ConfigMap and setting it in an environment variable called MYVAR.
For Reloader to work, you must specify the name of your ConfigMap in the annotations; in the example above that is:
metadata:
annotations:
configmap.reloader.stakater.com/reload: my-config
Apply the deployment example with kubectl apply -f mydeployment-example.yaml and check the variable in the new pod.
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hello
Now let's change the value of the variable:
Edit the ConfigMap with kubectl edit configmap my-config, change the value of myvar.value to hi, then save and close.
After saving, Reloader will recreate your pod, which picks up the new value from the ConfigMap.
To check if the rolling update was executed successfully:
kubectl rollout status deployment deployment-example
Check the new value:
$ kubectl exec -it $(kubectl get pods -l=app=nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}') env | grep MYVAR
MYVAR=hi
That's all!
Check the Reloader GitHub page to see more options.
I hope it helps!
Mounted ConfigMaps are updated automatically (reference).
To check the rollout history, use the steps below.
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
1 kubectl create --filename=busybox.yaml --record=true
Update the image on the deployment as below:
$ kubectl set image deployment.apps/busybox *=busybox --record
deployment.apps/busybox image updated
Check the new rollout history, which will list the new change cause for the rollout:
$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
1 kubectl create --filename=busybox.yaml --record=true
2 kubectl set image deployment.apps/busybox *=busybox --record=true
To roll back the deployment, use the undo subcommand of rollout:
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout undo deployment busybox
deployment.apps/busybox rolled back
ubuntu@dlv-k8s-cluster-master:~$ kubectl rollout history deployment busybox
REVISION CHANGE-CAUSE
2 kubectl set image deployment.apps/busybox *=busybox --record=true
3 kubectl create --filename=busybox.yaml --record=true
I am running a StatefulSet where I use volumeClaimTemplates; everything's fine there.
I also have a ConfigMap where I would like to essentially replace some entries with the name of the pod, for each pod that this config file is projected onto; e.g., if the ConfigMap data is:
ThisHost=<hostname -s>
OtherConfig1=1
OtherConfig1=2
...
then for the StatefulSet pod named mypod-0 the config file should contain ThisHost=mypod-0, and ThisHost=mypod-1 for mypod-1.
How could I do this?
The hostname is available inside the pod by default in an environment variable called HOSTNAME.
It is possible to modify the configmap itself if you first:
mount the configmap and set it to ThisHost=hostname -s (this will create a file in the pod's filesystem with that text)
pass a substitution command to the pod when starting (something like $ sed 's/hostname/$HOSTNAME/g' -i /path/to/configmapfile)
Basically, you mount the configmap and then replace it with the environment variable information that is available within the pod. It's just a substitution operation.
Look at the example below:
apiVersion: v1
kind: Pod
metadata:
name: command-demo
labels:
purpose: demonstrate-command
spec:
containers:
- name: command-demo-container
image: debian
command: ["sed"]
args: ["s/hostname/$HOSTNAME/g", "-i", "/path/to/config/map/mount/point"]
restartPolicy: OnFailure
The args' syntax might need some adjustments but you get the idea.
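For example, a sketch of a shell-wrapped variant so that $HOSTNAME actually gets expanded; it assumes the ConfigMap is mounted read-only at /config and copies the file to a writable path before substituting (all names and paths are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["/bin/sh", "-c"]
    # copy the read-only ConfigMap mount to a writable location, then substitute the placeholder
    args:
      - >
        cp /config/app.conf /tmp/app.conf &&
        sed -i "s/hostname/$HOSTNAME/g" /tmp/app.conf &&
        sleep infinity
    volumeMounts:
    - name: app-config
      mountPath: /config
  volumes:
  - name: app-config
    configMap:
      name: my-app-config   # hypothetical ConfigMap holding app.conf
  restartPolicy: OnFailure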
Please let me know if that helped.
I've found documentation about how to configure the NginX ingress controller using a ConfigMap: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
Unfortunately I have no idea, and couldn't find anywhere, how to make my ingress controller load that ConfigMap.
My ingress controller:
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
My config map:
kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-configmap
data:
proxy-read-timeout: "86400s"
client-max-body-size: "2g"
use-http2: "false"
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
tls:
- hosts:
- my.endpoint.net
secretName: ingress-tls
rules:
- host: my.endpoint.net
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 443
- path: /api
backend:
serviceName: api
servicePort: 443
How do I make my Ingress load the configuration from the ConfigMap?
I've managed to display what YAML gets executed by Helm by adding the --dry-run --debug options at the end of the helm install command. Then I noticed that the controller is executed with --configmap={namespace-where-the-nginx-ingress-is-deployed}/{name-of-the-helm-chart}-nginx-ingress-controller.
In order to load your ConfigMap, you need to override it with your own (note the namespace):
kind: ConfigMap
apiVersion: v1
metadata:
name: {name-of-the-helm-chart}-nginx-ingress-controller
namespace: {namespace-where-the-nginx-ingress-is-deployed}
data:
proxy-read-timeout: "86400"
proxy-body-size: "2g"
use-http2: "false"
The list of config properties can be found here.
One can pass ConfigMap properties at installation time too:
helm install stable/nginx-ingress --name nginx-ingress --set controller.config.use-forwarded-headers='"true"'
NOTE: for non-string values I had to use single quotes around double quotes to get it working.
If you used helm install to install ingress-nginx and no explicit value was passed for which ConfigMap the nginx controller should look at, the default seems to be {namespace}/{release-name}-nginx-ingress-controller. This is generated by https://github.com/helm/charts/blob/1e074fc79d0f2ee085ea75bf9bacca9115633fa9/stable/nginx-ingress/templates/controller-deployment.yaml#L67 (look for a similar file if that link goes dead).
To verify for yourself, find the command you installed the ingress-nginx chart with and add --dry-run --debug to it. This will show you the YAML files generated by Tiller to be applied to the cluster. The line # Source: nginx-ingress/templates/controller-deployment.yaml begins the controller deployment, which has a --configmap= arg. The value of this arg is the name the ConfigMap must have for the controller to pick it up and use it to update its own .conf file. It can be passed explicitly, but if it is not, it gets this default value.
If a ConfigMap is created with the RIGHT name, the controller's logs will show that it picked up the configuration change and reloaded itself.
This can be verified with kubectl logs <pod-name-of-controller> -n <namespace-arg-if-not-in-default-namespace>. My log messages contained the text Configuration changes detected, backend reload required. These log messages will not be present if the ConfigMap name was wrong.
I believe the official documentation for this is unnecessarily lacking, but maybe I'm incorrect? I will try to submit a PR with these details. Someone who knows more should help flesh them out so people don't need to stumble on this unnecessarily.
Cheers, thanks for your post.
If you want to give your own configuration while deploying nginx-ingress-controller, you can have a wrapper Helm chart over the original nginx-ingress Helm chart and provide your own values.yaml which can have custom configuration.
Using Helm 3 here.
Create a chart:
$ helm create custom-nginx
$ tree custom-nginx
So my chart structure looks like this:
custom-nginx/
├── Chart.yaml
├── charts
├── templates
│ ├── NOTES.txt
│ ├── _helpers.tpl
│ ├── deployment.yaml
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── service.yaml
│ ├── serviceaccount.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml
There are a few extra things here. Specifically, I don't need the complete templates/ directory and its contents, so I'll just remove those:
$ rm custom-nginx/templates/*
$ rmdir custom-nginx/templates
Now, the chart structure should look like this:
custom-nginx/
├── Chart.yaml
├── charts
└── values.yaml
Since we have to include the original nginx-ingress chart as a dependency, my Chart.yaml looks like this:
$ cat custom-nginx/Chart.yaml
apiVersion: v2
name: custom-nginx
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 1.39.1
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 0.32.0
dependencies:
- name: nginx-ingress
version: 1.39.1
repository: https://kubernetes-charts.storage.googleapis.com/
Here, appVersion is the nginx-controller Docker image version, and version matches the nginx-ingress chart version that I am using.
The only thing left is to provide your custom configuration. Here is a stripped-down version of my custom configuration:
$ cat custom-nginx/values.yaml
# Default values for custom-nginx.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
nginx-ingress:
controller:
ingressClass: internal-nginx
replicaCount: 1
service:
externalTrafficPolicy: Local
publishService:
enabled: true
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 3
targetCPUUtilizationPercentage: "80"
targetMemoryUtilizationPercentage: "80"
resources:
requests:
cpu: 1
memory: 2Gi
limits:
cpu: 1
memory: 2Gi
metrics:
enabled: true
config:
compute-full-forwarded-for: "true"
We can check the keys that are available to use as configuration (config section in values.yaml) in https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
And the rest of the configuration can be found here: https://github.com/helm/charts/tree/master/stable/nginx-ingress#configuration
Once the configuration is set, just download the dependencies of your chart:
$ helm dependency update <path/to/chart>
It's a good practice to do basic checks on your chart before deploying it:
$ helm lint <path/to/chart>
$ helm install --debug --dry-run --namespace <namespace> <release-name> <path/to/chart>
Then deploy your chart (which will deploy your nginx-ingress-controller with your own custom configurations).
Also, since you have a chart now, you can upgrade and roll back your chart.
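For example (release name, namespace and revision number are placeholders):
$ helm upgrade <release-name> <path/to/chart> --namespace <namespace>
$ helm rollback <release-name> <revision> --namespace <namespace>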
When installing the chart through terraform, the configuration values can be set as shown below:
resource "helm_release" "ingress_nginx" {
name = "nginx"
repository = "https://kubernetes.github.io/ingress-nginx/"
chart = "ingress-nginx"
set {
name = "version"
value = "v4.0.2"
}
set {
name = "controller.config.proxy-read-timeout"
value = "86400s"
}
set {
name = "controller.config.client-max-body-size"
value = "2g"
}
set {
name = "controller.config.use-http2"
value = "false"
}
}
Just to confirm NeverEndingQueue's answer above: the name of the ConfigMap is present in the nginx-controller pod spec itself, so if you inspect the YAML of the nginx-controller pod (kubectl get po release-name-nginx-ingress-controller-random-sequence -o yaml), under spec.containers you will find something like:
- args:
- /nginx-ingress-controller
- --default-backend-service=default/release-name-nginx-ingress-default-backend
- --election-id=ingress-controller-leader
- --ingress-class=nginx
- --configmap=default/release-name-nginx-ingress-controller
For example here, a config map named release-name-nginx-ingress-controller in the namespace default needs to be created.
Once done, you can verify if the changes have taken place by checking the logs. Normally, you will see something like:
I1116 10:35:45.174127 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"default", Name:"release-name-nginx-ingress-controller", UID:"76819abf-4df0-41e3-a3fe-25445e754f32", APIVersion:"v1", ResourceVersion:"62559702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap default/release-name-nginx-ingress-controller
I1116 10:35:45.184627 6 controller.go:141] Configuration changes detected, backend reload required.
I1116 10:35:45.396920 6 controller.go:157] Backend successfully reloaded.
When you apply a ConfigMap with the needed key-value data, the ingress controller picks up this information and inserts it into the nginx-ingress-controller Pod's original configuration file /etc/nginx/nginx.conf. Therefore it's easy to verify afterwards whether the ConfigMap's values have been successfully reflected, by checking the actual nginx.conf inside the corresponding Pod.
You can also check the logs of the relevant nginx-ingress-controller Pod to see whether the ConfigMap data has already been reloaded into the backend nginx.conf, and if not, to investigate the reason.
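For example (pod name, namespace and setting are placeholders):
$ kubectl exec -n <namespace> <nginx-ingress-controller-pod> -- grep <your-setting> /etc/nginx/nginx.conf
$ kubectl logs -n <namespace> <nginx-ingress-controller-pod> | grep -i reload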
Using enable-underscores-in-headers=true worked for me, not enable-underscores-in-headers='"true"':
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  --set controller.config.enable-underscores-in-headers=true
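To confirm the value landed in the rendered nginx config, you can grep for the corresponding nginx directive, underscores_in_headers (the pod name is a placeholder):
kubectl -n ingress-basic exec <nginx-ingress-controller-pod> -- grep underscores_in_headers /etc/nginx/nginx.conf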
I managed to update large-client-header-buffers in nginx via the ConfigMap. Here are the steps I followed.
Find the ConfigMap name in the nginx ingress controller pod description:
kubectl -n test-namespace describe pods/test-nginx-ingress-controller-584dd58494-d8fqr | grep configmap
--configmap=test-namespace/test-nginx-ingress-controller
Note: In my case, the namespace is "test-namespace" and the configmap name is "test-nginx-ingress-controller"
Create a ConfigMap YAML:
cat << EOF > test-nginx-ingress-controller-configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: test-nginx-ingress-controller
namespace: test-namespace
data:
large-client-header-buffers: "4 16k"
EOF
Note: please replace the namespace and ConfigMap name as per the findings in step 1.
Deploy the ConfigMap YAML:
kubectl apply -f test-nginx-ingress-controller-configmap.yaml
Then you will see the change reflected in the nginx controller pod after a few minutes, e.g.:
kubectl -n test-namespace exec -it test-nginx-ingress-controller-584dd58494-d8fqr -- cat /etc/nginx/nginx.conf|grep large
large_client_header_buffers 4 16k;
Based on NeverEndingQueue's answer, I want to provide an update for Kubernetes v1.23 / Helm 3.
This is my installation command + --dry-run --debug part:
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --dry-run --debug
This is the part we need from the generated output of the command above:
apiVersion: apps/v1
kind: DaemonSet
metadata:
...
spec:
...
template:
...
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
...
args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --...
- --configmap=${POD_NAMESPACE}/ingress-nginx-controller
- --...
....
We need this part: --configmap=${POD_NAMESPACE}/ingress-nginx-controller.
As you can see, the name of the ConfigMap must be ingress-nginx-controller, and the namespace must be the one you used during chart installation (i.e. ${POD_NAMESPACE}; in my example above this is --namespace ingress-nginx).
# nginx-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
data:
map-hash-bucket-size: "128"
Then run kubectl apply -f nginx-config.yaml to apply the ConfigMap, and the nginx pod(s) will be auto-reloaded with the updated config.
To check that the nginx config has been updated, find the name of an nginx pod (you can use any of them if you have several nodes): kubectl get pods -n ingress-nginx (or kubectl get pods -A)
and then check the config: kubectl exec -it ingress-nginx-controller-{generatedByKubernetesId} -n ingress-nginx -- cat /etc/nginx/nginx.conf
UPDATE:
The correct name (ie name: ingress-nginx-controller) is shown in the official docs. Conclusion: no need to reinvent the wheel.
What you have is an Ingress YAML, not an Ingress controller deployment YAML. The Ingress controller is the Pod that actually does the work and is usually an nginx container itself. An example of such a configuration can be found in the documentation you shared.
UPDATE
Using the example provided, you can also load config into nginx from a ConfigMap in the following way:
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
volumes:
- name: nginx-config
configMap:
name: nginx-config
Here nginx-config is a ConfigMap that contains your nginx configuration.
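One way to create that ConfigMap from a local nginx.conf, so that the key matches the subPath used above (the file name is illustrative):
kubectl create configmap nginx-config --from-file=nginx.conf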
I read the above answers but could not make it work.
What worked for me was the following:
release_name=tcp-udp-ic
# add the helm repo from NginX and update the chart
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
echo "- Installing -${release_name}- into cluster ..."
#delete the config map if already exists
kubectl delete cm tcp-udp-ic-cm
helm del --purge ${release_name}
helm upgrade --install ${release_name} \
--set controller.image.tag=1.6.0 \
--set controller.config.name=tcp-udp-ic-cm \
nginx-stable/nginx-ingress --version 0.4.0 #--dry-run --debug
# update the /etc/nginx/nginx.conf file with my attributes, via the config map
kubectl apply -f tcp-udp-ic-cm.yaml
and the tcp-udp-ic-cm.yaml is :
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-udp-ic-cm
namespace: default
data:
worker-connections : "10000"
Essentially I need to deploy the release with Helm and set the name of the ConfigMap it is going to use. Helm creates the ConfigMap, but empty. Then I apply the ConfigMap file in order to update the ConfigMap resource with my values. This sequence is the only one I could make work.
An easier way of doing this is just modifying the values that are deployed through Helm. The values that need to go into the ConfigMap are now under controller.config.entries. Show the latest values with helm show values nginx-stable/nginx-ingress and look for the format in the version you are running.
I had tons of issues with this since all references online said controller.config, until I checked with the command above.
After you've entered the values, upgrade with:
helm upgrade -f <PATH_TO_FILE>.yaml <NAME> nginx-stable/nginx-ingress
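A minimal values file sketch for that layout, assuming your chart version exposes controller.config.entries as described above (the key and value are just examples):
# values-nginx-ingress.yaml
controller:
  config:
    entries:
      worker-connections: "10000"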
The nginx ingress controller may cause issues with forwarding. We were able to get it working with nginx via X-Forwarded-Proto etc., but it was a bit complicated and convoluted.
Moving to haproxy instead resolved this problem. Also, make sure you are interfacing with the ingress controller over HTTPS, or that may cause issues with Keycloak.
Keycloak v18 with --proxy edge
annotations:
kubernetes.io/ingress.class: haproxy
...
I took the CKA exam and I needed to work with Daemonsets for quite a while there. Since it is much faster to do everything with kubectl instead of creating yaml manifests for k8s resources, I was wondering if it is possible to create Daemonset resources using kubectl.
I know that it is NOT possible to create it using a regular kubectl create daemonset command, at least for now, and there is no description of it in the documentation. But maybe there is some different way to do it?
The best thing I can do right now is to create a Deployment first with kubectl create deployment and edit its output manifest. Any options here?
The fastest hack is to create a deployment file using
kubectl create deploy nginx --image=nginx --dry-run -o yaml > nginx-ds.yaml
Now replace the line kind: Deployment with kind: DaemonSet in nginx-ds.yaml and remove the line replicas: 1
However, the following command will give a clean DaemonSet manifest, assuming that apps/v1 is the API used for DaemonSets in your cluster:
kubectl create deploy nginx --image=nginx --dry-run -o yaml | \
sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' > nginx-ds.yaml
You have your nginx DaemonSet.
CKA allows access to K8S documentation. So, it should be possible to get a sample YAML for different resources from there. Here is the one for the Daemonset from K8S documentation.
Also, not sure if the certification environment has access to resources in the kube-system namespace. If yes, then use the below command to get a sample yaml for Daemonset.
kubectl get daemonsets kube-flannel-ds-amd64 -o yaml -n=kube-system > daemonset.yaml
It's impossible. At least for Kubernetes 1.12. The only option is to get a sample Daemonset yaml file and go from there.
The fastest way to create one:
kubectl create deploy nginx --image=nginx --dry-run -o yaml > nginx-ds.yaml
Now replace the line kind: Deployment with kind: DaemonSet in nginx-ds.yaml and remove the lines replicas: 1, strategy: {} and status: {} as well.
Otherwise it shows errors for some required fields, like this:
error: error validating "nginx-ds.yaml": error validating data: [ValidationError(DaemonSet.spec): unknown field "strategy" in io.k8s.api.apps.v1.DaemonSetSpec, ValidationError(DaemonSet.status): missing required field "currentNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus,ValidationError(DaemonSet.status): missing required field "numberMisscheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "desiredNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.apps.v1.DaemonSetStatus]; if you choose to ignore these errors, turn validation off with --validate=false
There is no such option to create a DaemonSet using kubectl. But still, you can prepare a Yaml file with basic configuration for a DaemonSet, e.g. daemon-set-basic.yaml, and create it using kubectl create -f daemon-set-basic.yaml
You can edit the new DaemonSet using kubectl edit daemonset <name-of-the-daemon-set>, or modify the YAML file and apply the changes with kubectl apply -f daemon-set-basic.yaml. Note: if you want to update the configuration by modifying the file and using the apply command, it is better to use apply instead of create when you first create the DaemonSet.
Here is an example of a simple DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd-elasticsearch
labels:
k8s-app: fluentd-logging
spec:
selector:
matchLabels:
name: fluentd-elasticsearch
template:
metadata:
labels:
name: fluentd-elasticsearch
spec:
containers:
- name: fluentd-elasticsearch
image: k8s.gcr.io/fluentd-elasticsearch:1.20
You could take advantage of the Kubernetes architecture and obtain the definition of a DaemonSet from an existing cluster. Have a look at kube-proxy, which is a network component that runs on each node in your cluster.
kube-proxy is deployed as a DaemonSet, so you can extract its definition with the command below:
$ kubectl get ds kube-proxy -n kube-system -o yaml > kube-proxy.ds.yaml
Warning!
By extracting the DaemonSet definition from kube-proxy, be aware that:
You will have to do plenty of cleanup!
You will have to change apiVersion from extensions/v1beta1 to apps/v1
I used the following commands:
Create a ReplicaSet or Deployment with a Kubernetes imperative command:
kubectl create deployment <daemonset_name> --image=<image> --dry-run -o yaml > file.txt
Edit the kind, replacing Deployment with DaemonSet, and remove the replicas and strategy fields.
kubectl apply -f file.txt
During the CKA examination you are allowed to access the Kubernetes documentation for DaemonSets, so you can use the link and get examples of DaemonSet YAML files. However, you could also use the way you mentioned: change a Deployment specification into a DaemonSet specification. You need to change the kind to DaemonSet and remove the strategy, replicas and status fields. That would do.
Using the deployment create command and modifying its output, one can create a DaemonSet very quickly.
Below is a one-line command to create a DaemonSet:
kubectl create deployment elasticsearch --namespace=kube-system --image=k8s.gcr.io/fluentd-elasticsearch:1.20 --dry-run -o yaml | grep -v "creationTimestamp\|status\|replicas\|strategy" | awk '{gsub(/Deployment/, "DaemonSet"); print }'
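To use the output, redirect it to a file and apply it (the file name is just an example):
kubectl create deployment elasticsearch --namespace=kube-system --image=k8s.gcr.io/fluentd-elasticsearch:1.20 --dry-run -o yaml | grep -v "creationTimestamp\|status\|replicas\|strategy" | awk '{gsub(/Deployment/, "DaemonSet"); print }' > elasticsearch-ds.yaml
kubectl apply -f elasticsearch-ds.yaml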