According to the ingress-nginx controller docs, the objects are deployed in the ingress-nginx namespace, and the namespace to watch can be changed with the --watch-namespace flag.
But when I run
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx0.28.0/deploy/static/provider/aws/service-l7.yaml --watch-namespace=default
It reports:
Error: unknown flag: --watch-namespace
See 'kubectl apply --help' for usage.
You are confusing a controller flag with a kubectl flag. By default, the following command deploys the controller in the ingress-nginx namespace. But you want it in some other namespace, like default. To do so, you need to pass kubectl's -n or --namespace flag.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx0.28.0/deploy/static/provider/aws/service-l7.yaml --namespace default
NB:
--watch-namespace is a flag of the nginx-ingress-controller binary. It is used when running the binary inside the container and needs to be set in deployment.spec.containers[].args[]. It restricts the controller's watch to a single Kubernetes namespace (by default it watches objects in all namespaces).
You need to set --watch-namespace in the args section of the nginx ingress controller Deployment YAML:
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/$(NGINX_CONFIGMAP_NAME)
- --tcp-services-configmap=$(POD_NAMESPACE)/$(TCP_CONFIGMAP_NAME)
- --udp-services-configmap=$(POD_NAMESPACE)/$(UDP_CONFIGMAP_NAME)
- --publish-service=$(POD_NAMESPACE)/$(SERVICE_NAME)
- --annotations-prefix=nginx.ingress.kubernetes.io
- --watch-namespace=namespace
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/cloud-generic/deployment.yaml
I am just exploring and want to deploy my k8dash with Helm, but I got a weird error, even though I have been able to deploy it on AWS EKS.
I am running this on Minikube v1.23.2.
My Helm version is v3.6.2.
My kubectl version is v1.22.3.
Basically, if I run helm template, the VirtualServer looks like this:
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: k8dash
  namespace: k8dash
spec:
  host: namahost.com
  routes:
  - action:
      pass: RELEASE-NAME
    path: /
  upstreams:
  - name: RELEASE-NAME
    port: 80
    service: RELEASE-NAME
and I got this error:
Error: unable to build Kubernetes objects from release manifest: unable to recognize "": no matches for kind "VirtualServer" in version "k8s.nginx.org/v1"
It's weird: deploying this on AWS EKS works just fine, but locally I get this error, and I could not find any clue while googling. Does it have something to do with my tool versions?
You have to install additional CRDs, as both VirtualServer and VirtualServerRoute are not available out of the box; they are NGINX custom resources.
CustomResourceDefinitions:
The CustomResourceDefinition API resource allows you to define custom
resources. Defining a CRD object creates a new custom resource with a
name and schema that you specify. The Kubernetes API serves and
handles the storage of your custom resource. The name of a CRD object
must be a valid DNS subdomain name.
This frees you from writing your own API server to handle the custom
resource, but the generic nature of the implementation means you have
less flexibility than with API server aggregation.
Nginx Create Custom Resources
Note: By default, it is required to create custom resource definitions
for VirtualServer, VirtualServerRoute, TransportServer and Policy.
Otherwise, the Ingress Controller pods will not become Ready. If you’d
like to disable that requirement, configure -enable-custom-resources
command-line argument to false and skip this section.
Create custom resource definitions for the VirtualServer, VirtualServerRoute, TransportServer and Policy resources.
You can find the CRDs under https://github.com/nginxinc/kubernetes-ingress/tree/master/deployments/common/crds:
$ git clone https://github.com/nginxinc/kubernetes-ingress/
$ cd kubernetes-ingress/deployments
$ git checkout v2.0.3   # or latest, as you wish
$ kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
$ kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
After applying them successfully, you will be able to create both VirtualServer and VirtualServerRoute resources.
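To double-check that the CRDs were registered before re-running the install, something like this should work (a quick verification sketch; the API group matches the apiVersion used above):
$ kubectl get crds | grep k8s.nginx.org
$ kubectl api-resources --api-group=k8s.nginx.org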
I have a pod in my Kubernetes cluster that needs to be updated to have a securityContext. So I generated a YAML file using:
kubectl get pod pod_name -o yaml > mypod.yaml
After updating the required securityContext and executing:
kubectl apply -f mypod.yaml
no changes are observed in the pod.
Whereas a freshly created YAML file works perfectly fine.
New YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: default
spec:
  securityContext:
    runAsUser: 1010
  containers:
  - command:
    - sleep
    - "4800"
    image: ubuntu
    name: myubuntuimage
Immutable fields
In Kubernetes you can find information about Immutable fields.
A lot of fields in APIs tend to be immutable, they can't be changed after creation. This is true for example for many of the fields in pods. There is currently no way to declaratively specify that fields are immutable, and one has to rely on either built-in validation for core types, or have to build a validating webhooks for CRDs.
Why?
There are resources in Kubernetes which have immutable fields by design, i.e. after creation of an object, those fields cannot be mutated anymore. E.g. a pod's specification is mostly unchangeable once it is created. To change the pod, it must be deleted, recreated and rescheduled.
Editing existing pod configuration
If you try to apply the new config with the security context using kubectl apply, you will get an error like the one below:
The Pod "mypod" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
You will get the same output if you use kubectl patch:
kubectl patch pod mypod -p '{"spec":{"securityContext":{"runAsUser":1010}}}'
kubectl edit will also not change this specific configuration:
$ kubectl edit pod
Edit cancelled, no changes made.
Solution
If you need only one pod, you must delete it and create a new one with the requested configuration.
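A minimal sketch of that, assuming the updated manifest is the mypod.yaml from the question:
$ kubectl delete pod mypod
$ kubectl apply -f mypod.yaml
Alternatively, kubectl replace --force -f mypod.yaml deletes and recreates the pod in one step.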
A better solution is to use a resource that manages pods and fulfils such requirements for you, like a Deployment. After a change to the current configuration, the Deployment creates a new ReplicaSet, which creates new pods with the new configuration:
by updating the PodTemplateSpec of the Deployment. A new ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
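As a sketch, the same container from the question wrapped in a Deployment could look like this (the Deployment name and the app label are assumptions); changing the securityContext here triggers a rolling update instead of being rejected:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mypod-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mypod
  template:
    metadata:
      labels:
        app: mypod
    spec:
      securityContext:
        runAsUser: 1010
      containers:
      - name: myubuntuimage
        image: ubuntu
        command: ["sleep", "4800"]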
I am trying to set the value of ssl-session-cache in my ConfigMap for the ingress controller;
the problem is that I can't find out how to write it correctly.
I need the following changes in the nginx config:
ssl-session-cache builtin:3000 shared:SSL:100m
ssl-session-timeout: 3000
When I add
ssl-session-timeout: "3000" to the ConfigMap, it works correctly; I can see this in the nginx config a few seconds later.
But how should I write ssl-session-cache?
ssl-session-cache: builtin:"3000" shared:SSL:"100m" goes well, but no changes in nginx
ssl-session-cache: "builtin:3000 shared:SSL:100m" goes well, but no changes in nginx
ssl-session-cache "builtin:3000 shared:SSL:100m" syntax error - can't change the configmap
ssl-session-cache builtin:"3000 shared:SSL:100m" syntax error - can't change the configmap
Does someone have an idea how to set ssl-session-cache in the ConfigMap correctly?
Thank you!
TL;DR
After digging around and testing the same scenario in my lab, I've found out how to make it work.
As you can see here, the parameter ssl-session-cache takes a boolean value that only specifies whether the cache is enabled or not.
The change you need is handled by the parameter ssl-session-cache-size, which takes a string, so it would be reasonable to suppose that setting its value to builtin:3000 shared:SSL:100m would work. But after reproducing this and diving into the nginx configuration, I've concluded that it will not work, because the option builtin:1000 is hardcoded.
In order to make it work as expected, I've found a solution using a custom nginx template as a configMap mounted as a volume into the nginx-controller pod, plus another configMap to make the change to the ssl-session-cache-size parameter.
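For reference, a sketch of how those two ConfigMap keys are meant to be used (note that neither of them touches the builtin: size):
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  ssl-session-cache: "true"        # boolean: only enables or disables the cache
  ssl-session-cache-size: "100m"   # string: only sets the shared:SSL size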
Workaround
Take a look at line 343 of the file /etc/nginx/template/nginx.tmpl in the nginx-ingress-controller pod:
bash-5.0$ grep -n 'builtin:' nginx.tmpl
343: ssl_session_cache builtin:1000 shared:SSL:{{ $cfg.SSLSessionCacheSize }};
As you can see, the option builtin:1000 is hardcoded and cannot be changed using custom data with your approach.
However, there are some ways to make it work: you could directly change the template file inside the pod, but those changes would be lost if the pod dies for some reason, or you could use a custom template mounted as a configMap into the nginx-controller pod.
In this case, let's create a configMap with the nginx.tmpl content, changing the value on line 343 to the desired value.
Get the template file from the nginx-ingress-controller pod; this creates a file called nginx.tmpl locally:
NOTE: Make sure the namespace is correct.
$ NGINX_POD=$(kubectl get pods -n ingress-nginx -l=app.kubernetes.io/component=controller -ojsonpath='{.items[].metadata.name}')
$ kubectl exec $NGINX_POD -n ingress-nginx -- cat template/nginx.tmpl > nginx.tmpl
Change the value on line 343 from builtin:1000 to builtin:3000:
$ sed -i '343s/builtin:1000/builtin:3000/' nginx.tmpl
Check that everything is OK:
$ grep builtin nginx.tmpl
ssl_session_cache builtin:3000 shared:SSL:{{ $cfg.SSLSessionCacheSize }};
Ok, at this point we have a nginx.tmpl file with the desired parameter changed.
Let's move on and create a configMap with the custom nginx.tmpl file:
$ kubectl create cm nginx.tmpl --from-file=nginx.tmpl
configmap/nginx.tmpl created
This creates a configMap called nginx.tmpl in the ingress-nginx namespace; if your ingress controller's namespace is different, make the proper changes before applying.
After that, we need to edit the nginx-ingress deployment and add a new volume and a volumeMount to the container spec. In my case, the nginx-ingress deployment is named ingress-nginx-controller and lives in the ingress-nginx namespace.
Edit the deployment file:
$ kubectl edit deployment -n ingress-nginx ingress-nginx-controller
And add the following configuration in the correct places:
...
volumeMounts:
- mountPath: /etc/nginx/template
  name: nginx-template-volume
  readOnly: true
...
volumes:
- name: nginx-template-volume
  configMap:
    name: nginx.tmpl
    items:
    - key: nginx.tmpl
      path: nginx.tmpl
...
After saving the file, the nginx controller pod will be recreated with the configMap mounted as a file inside the pod.
Let's check if the changes were propagated:
$ kubectl exec -n ingress-nginx $NGINX_POD -- cat nginx.conf | grep -n ssl_session_cache
223: ssl_session_cache builtin:3000 shared:SSL:10m;
Great, the first part is done!
Now, for the shared:SSL:10m part, we can use the same approach you already used: a configMap with the specific parameters, as mentioned in this doc.
If you remember, in nginx.tmpl the shared:SSL size comes from a variable called SSLSessionCacheSize ({{ $cfg.SSLSessionCacheSize }}); in the source code you can check that this variable is set by the option ssl-session-cache-size:
// Size of the SSL shared cache between all worker processes.
// http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache
SSLSessionCacheSize string `json:"ssl-session-cache-size,omitempty"`
So, all we need to do is create a configMap with this parameter and the desired value:
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  ssl-session-cache-size: "100m"
Note: Adjust the namespace and configMap name to match your environment.
After applying this configMap, NGINX will reload the configuration and make the changes in the configuration file.
Checking the results:
$ NGINX_POD=$(kubectl get pods -n ingress-nginx -l=app.kubernetes.io/component=controller -ojsonpath='{.items[].metadata.name}')
$ kubectl exec -n ingress-nginx $NGINX_POD -- cat nginx.conf | grep -n ssl_session_cache
223: ssl_session_cache builtin:3000 shared:SSL:100m;
Conclusion
It works as expected. Unfortunately, I couldn't find a way to make the builtin: size configurable through a variable, so it stays hardcoded in the template, but now the template lives in a configMap that you can easily change if needed.
References:
NGINX Ingress Custom Template
NGINX Ingress Source Code
The web session timeout for Kubernetes Dashboard is pretty short. I can't see any setting or configuration parameter to change it.
I tried inspecting the container contents with kubectl exec, but there does not seem to be any shell (sh, bash, ash, etc.), so I can't see what web server parameters are configured inside.
I would like to make this timeout longer, to make it easier to keep track of job executions for long periods of time.
How can I proceed?
There are two ways. When you deploy the manifest originally, you can modify the container args to include the flag --token-ttl=43200, where 43200 is the number of seconds you want the automatic timeout to be.
If you want to change the configuration post-deployment, you can edit the existing deployment, which will trigger the pod to be redeployed with the new arguments. To do this, run kubectl edit deployment -n kube-system kubernetes-dashboard and add the argument mentioned above to the args section.
EDIT: If you are using V2 of the Dashboard (Still in beta) then you will need to change the namespace in the command from kube-system to kubernetes-dashboard. (Or somewhere else if you customized it)
EDIT2: You can also set token-ttl to 0 to disable timeouts entirely.
In the v2.2.0 version (~year 2021) of the default installation (https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml), they use kubernetes-dashboard as the namespace.
The command would look like this:
kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard
The change would look like this:
# ... content before ...
spec:
  containers:
  - args:
    - --auto-generate-certificates
    - --namespace=kubernetes-dashboard
    - --token-ttl=0 # <-- add this with your timeout
    image: kubernetesui/dashboard:v2.0.0
# ... content after ...
As TJ Zimmerman suggested: 0 = no-timeout.
If you are using Helm, the token TTL can be set in values.yaml like this:
extraArgs:
- --token-ttl=86400
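For example, assuming you are using the official kubernetes-dashboard chart and the snippet above is in values.yaml, deploying it could look like this (the release name and chart reference are assumptions):
$ helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
    --namespace kubernetes-dashboard -f values.yaml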
The same as previous answers, but if editing files isn't your bag and you prefer to just run a command, you can patch your (default) dashboard deployment with:
kubectl patch --namespace kubernetes-dashboard deployment \
kubernetes-dashboard --type='json' --patch \
'[{"op": "add", "path": "/spec/template/spec/containers/0/args/2", "value": "--token-ttl=43200" }]'
(adjust 43200 to whatever TTL value you want to set).
I took the CKA exam and I needed to work with Daemonsets for quite a while there. Since it is much faster to do everything with kubectl instead of creating yaml manifests for k8s resources, I was wondering if it is possible to create Daemonset resources using kubectl.
I know that it is NOT possible to create one using a plain kubectl create daemonset command, at least for now. And there is no description of it in the documentation. But maybe there is a way to do it differently?
The best thing I can do right now is to create a Deployment first with kubectl create deployment and edit its output manifest. Any options here?
The fastest hack is to create a deployment manifest using:
kubectl create deploy nginx --image=nginx --dry-run -o yaml > nginx-ds.yaml
Now replace the line kind: Deployment with kind: DaemonSet in nginx-ds.yaml and remove the line replicas: 1.
Alternatively, the following command gives a clean DaemonSet manifest, assuming that apps/v1 is the API version used for DaemonSets in your cluster:
kubectl create deploy nginx --image=nginx --dry-run -o yaml | \
sed '/null\|{}\|replicas/d;/status/,$d;s/Deployment/DaemonSet/g' > nginx-ds.yaml
You have your nginx DaemonSet.
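For reference, the resulting nginx-ds.yaml should look roughly like this:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx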
The CKA exam allows access to the Kubernetes documentation, so it should be possible to get a sample YAML for different resources from there. Here is the one for DaemonSets from the Kubernetes documentation.
Also, I am not sure whether the certification environment has access to resources in the kube-system namespace. If yes, then use the command below to get a sample YAML for a DaemonSet:
kubectl get daemonsets kube-flannel-ds-amd64 -o yaml -n=kube-system > daemonset.yaml
It's impossible, at least as of Kubernetes 1.12. The only option is to get a sample DaemonSet YAML file and go from there.
The fastest way to create one:
kubectl create deploy nginx --image=nginx --dry-run -o yaml > nginx-ds.yaml
Now replace the line kind: Deployment with kind: DaemonSet in nginx-ds.yaml and remove the lines replicas: 1, strategy: {} and status: {} as well.
Otherwise it reports errors about unknown and missing fields, like this:
error: error validating "nginx-ds.yaml": error validating data: [ValidationError(DaemonSet.spec): unknown field "strategy" in io.k8s.api.apps.v1.DaemonSetSpec, ValidationError(DaemonSet.status): missing required field "currentNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus,ValidationError(DaemonSet.status): missing required field "numberMisscheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "desiredNumberScheduled" in io.k8s.api.apps.v1.DaemonSetStatus, ValidationError(DaemonSet.status): missing required field "numberReady" in io.k8s.api.apps.v1.DaemonSetStatus]; if you choose to ignore these errors, turn validation off with --validate=false
There is no such option to create a DaemonSet using kubectl. But you can still prepare a YAML file with a basic configuration for a DaemonSet, e.g. daemon-set-basic.yaml, and create it using kubectl create -f daemon-set-basic.yaml.
You can edit the new DaemonSet using kubectl edit daemonset <name-of-the-daemon-set>, or modify the YAML file and apply the changes with kubectl apply -f daemon-set-basic.yaml. Note: if you want to update the configuration later by modifying the file and using the apply command, it is better to use apply instead of create when you first create the DaemonSet.
Here is an example of a simple DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
You could take advantage of the Kubernetes architecture and obtain the definition of a DaemonSet from an existing cluster. Have a look at kube-proxy, which is a network component that runs on each node in your cluster.
kube-proxy is deployed as a DaemonSet, so you can extract its definition with the command below:
$ kubectl get ds kube-proxy -n kube-system -o yaml > kube-proxy.ds.yaml
Warning!
When extracting the definition of a DaemonSet from kube-proxy, be aware that:
You will have to do plenty of clean-up (see the sketch below)!
You will have to change apiVersion from extensions/v1beta1 to apps/v1.
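A rough clean-up sketch (the exact fields kubectl adds to a live object vary by version, so review the result before applying):
$ sed -i 's|extensions/v1beta1|apps/v1|' kube-proxy.ds.yaml
$ sed -i '/creationTimestamp\|resourceVersion\|selfLink\|uid\|generation/d' kube-proxy.ds.yaml
$ sed -i '/^status:/,$d' kube-proxy.ds.yaml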
I used the following commands:
Either create a ReplicaSet or a Deployment with an imperative kubectl command:
kubectl create deployment <daemonset_name> --image= --dry-run -o yaml > file.txt
Edit the kind field, replacing Deployment with DaemonSet, and remove the replicas and strategy fields.
kubectl apply -f file.txt
During the CKA examination you are allowed to access the Kubernetes documentation for DaemonSets. You could use the link and get examples of DaemonSet YAML files. However, you could also use the approach you mentioned: change a Deployment specification into a DaemonSet specification. You need to change the kind to DaemonSet and remove the strategy, replicas and status fields. That would do.
Using the kubectl create deployment command and modifying its output, one can create a DaemonSet very quickly.
Below is a one-line command to create a DaemonSet:
kubectl create deployment elasticsearch --namespace=kube-system --image=k8s.gcr.io/fluentd-elasticsearch:1.20 --dry-run -o yaml | grep -v "creationTimestamp\|status" | awk '{gsub(/Deployment/, "DaemonSet"); print }'
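A sketch that redirects the output to a file (the filename is just an example), strips the replicas/strategy/empty fields the dry run adds so the result passes validation, and then applies it:
$ kubectl create deployment elasticsearch --namespace=kube-system \
    --image=k8s.gcr.io/fluentd-elasticsearch:1.20 --dry-run -o yaml | \
  grep -v "creationTimestamp\|status\|replicas\|{}" | \
  awk '{gsub(/Deployment/, "DaemonSet"); print }' > elasticsearch-ds.yaml
$ kubectl apply -f elasticsearch-ds.yaml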