The web session timeout for Kubernetes Dashboard is pretty short. I can't see any setting or configuration parameter to change it.
I tried inspecting the container contents with kubectl exec, but there does not seem to be any shell (sh, bash, ash, etc.), so I can't see what web server parameters are configured inside.
I would like to make this timeout longer, to make it easier to keep track of job executions for long periods of time.
How can I proceed?
There are two ways. When you deploy the manifest originally, you can modify the container args to include --token-ttl=43200, where 43200 is the number of seconds after which the session automatically times out.
If you want to change the configuration post-deployment, you can edit the existing deployment, which will trigger the pod to be recreated with the new arguments. To do this, run kubectl edit deployment -n kube-system kubernetes-dashboard and add the argument mentioned above to the args section.
EDIT: If you are using V2 of the Dashboard (Still in beta) then you will need to change the namespace in the command from kube-system to kubernetes-dashboard. (Or somewhere else if you customized it)
EDIT2: You can also set token-ttl to 0 to disable timeouts entirely.
In the v2.2.0 version (~year 2021) of the default installation (https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml), they use kubernetes-dashboard as the namespace.
The command would look like this:
kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard
The change would look like this:
# ... content before...
    spec:
      containers:
      - args:
        - --auto-generate-certificates
        - --namespace=kubernetes-dashboard
        - --token-ttl=0   # <-- add this with your timeout
        image: kubernetesui/dashboard:v2.0.0
# ... content after ...
As TJ Zimmerman suggested: 0 = no-timeout.
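Once you save the edit, the Deployment recreates the pod automatically; if you want to confirm the rollout finished, something like this should work (assuming the default names from the manifest above):
kubectl rollout status deployment/kubernetes-dashboard -n kubernetes-dashboard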
If using helm, the token-timeout can be set in the values.yaml like this:
extraArgs:
- --token-ttl=86400
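To apply that values file, a hedged sketch (assuming the official kubernetes-dashboard chart and a release also named kubernetes-dashboard; adjust the names to your installation):
helm upgrade kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard -f values.yaml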
The same as previous answers, but if editing files isn't your bag and you prefer to just run a command, you can patch your (default) dashboard deployment with:
kubectl patch --namespace kubernetes-dashboard deployment \
kubernetes-dashboard --type='json' --patch \
'[{"op": "add", "path": "/spec/template/spec/containers/0/args/2", "value": "--token-ttl=43200" }]'
(adjust 43200 to whatever TTL value you want to set).
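To confirm the argument landed where you expected, you can inspect the container args afterwards:
kubectl get deployment kubernetes-dashboard --namespace kubernetes-dashboard \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
Note that the /args/2 path in the patch assumes the container already has at least two arguments (as in the default manifest); adjust the index, or use /args/- to append, if yours differs.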
Related
Very new to Kubernetes. In the past I've used kubectl/Kustomize to deploy pods/services using the same repetitive pattern:
On the file system, in my project, I'll have two YAML files such as kustomization.yml and my-app-service.yml that look something like:
kustomization.yml
resources:
- my-app-service.yml
my-app-service.yml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
... etc.
From the command-line, I'll run something like the following:
kubectl apply -k /path/to/the/kustomization.yml
And if all goes well -- boom -- I've got the service running on my cluster. Neato.
I am now trying to troubleshoot something and want to deploy a single container of the google/cloud-sdk image on my cluster, so that I can SSH into it and do some network/authorization-related testing.
The idea is:
Deploy it
SSH in, do my testing
Delete the container/clean it up
I believe K8s doesn't deal with containers directly, so I'll probably need to deploy a one-container pod or service (not clear on the difference there), but you get the idea.
I could just create 2 YAML files and run the kubectl apply ... on them like I do for everything else, but that feels a bit heavy-handed. I'm wondering if there is an easier way to do this all from the command-line without having to create YAML files each time I want to do a test like this. Is there?
You can create a pod running a container with the given image with this command:
kubectl run my-debug-pod --image=google/cloud-sdk
And if you want to create a Deployment without writing the YAML, you can do:
kubectl create deployment my-app --image=my-container-image
You could also just generate the YAML, save it to a file, and then use kubectl apply -f deployment.yaml. Generate it with:
kubectl create deployment my-app --image=my-container-image \
--dry-run=client -o yaml > deployment.yaml
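Once the pod from the kubectl run example above is up, you can open a shell in it and clean it up afterwards; a minimal sketch using that pod name (the google/cloud-sdk image ships with bash):
kubectl exec -it my-debug-pod -- bash
# ... do the network/authorization testing ...
kubectl delete pod my-debug-pod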
I have the following manifest that created the running pod named 'test'
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: blue
spec:
  containers:
  - name: funskies
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello World'"]
I want to update the pod to include the additional command
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  labels:
    app: blue
spec:
  restartPolicy: Never
  containers:
  - name: funskies
    image: busybox
    command: ["/bin/sh", "-c", "echo 'Hello World' > /home/my_user/logging.txt"]
What I tried
kubectl edit pod test
What resulted
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
# pods "test" was not valid:
# * spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`...
Other things I tried:
Updated the manifest and then ran apply - same issue
kubectl apply -f test.yaml
Question: What is the proper way to update a running pod?
You can't modify most properties of a Pod. Typically you don't want to directly create Pods; use a higher-level controller like a Deployment.
The Kubernetes documentation for a PodSpec notes (emphasis mine):
containers: List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.
In all cases, no matter what, a container runs a single command, and if you want to change what that command is, you need to delete and recreate the container. In Kubernetes this always means deleting and recreating the containing Pod. Usually you shouldn't use bare Pods, but if you do, you can create a new Pod with the new command and delete the old one. Deleting Pods is extremely routine and all kinds of ordinary things cause it to happen (updating Deployments, a HorizontalPodAutoscaler scaling down, ...).
If you have a Deployment instead of a bare Pod, you can freely change the template: for the Pods it creates. This includes changing their command:. This will result in the Deployment creating a new Pod with the new command, and once it's running, deleting the old Pod.
The sorts of very-short-lived single-command containers you show in the question aren't necessarily well-suited to running in Kubernetes. If the Pod isn't going to stay running and serve requests, a Job could be a better match; but a Job believes it will only be run once, and if you change the pod spec for a completed Job I don't think it will launch a new Pod. You'd need to create a new Job for this case.
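For illustration, a hedged sketch of what such a Job could look like, reusing the container from the question (the Job name is made up):
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-world-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: funskies
        image: busybox
        command: ["/bin/sh", "-c", "echo 'Hello World'"]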
I am not sure what the whole requirement is, but you can exec into the pod and update the details:
$ kubectl exec <pod-name> -it -n <namespace> -- <command to execute>
For example:
$ kubectl exec pod/hello-world-xxxx-xx -it -- /bin/bash
If bash isn't available but the tty supports a shell, use "/bin/sh" to update the content or command.
Editing the running pod will not retain the changes in the manifest file, so in that case you have to run a new pod with the changes.
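If you have already updated the manifest file, one way to delete and recreate the Pod in a single step is kubectl replace with --force, which removes the existing object before creating the new one:
$ kubectl replace --force -f test.yaml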
I am using Helm v3.3.0, with a Kubernetes 1.16.
The cluster has the Kubernetes Service Catalog installed, so external services implementing the Open Service Broker API spec can be instantiated as K8S resources - as ServiceInstances and ServiceBindings.
ServiceBindings reflect as K8S Secrets and contain the binding information of the created external service. These secrets are usually mapped into the Docker containers as environment variables or volumes in a K8S Deployment.
Now I am using Helm to deploy my Kubernetes resources, and I read here that...
The [Helm] install order of Kubernetes types is given by the enumeration InstallOrder in kind_sorter.go
In that file, the order mentions neither ServiceInstance nor ServiceBinding as resources, which would mean that Helm installs these resource types after everything in its InstallOrder list - in particular after Deployments. That seems to match the output of helm install --dry-run --debug run on my chart, where the order indicates that the K8S Service Catalog resources are applied last.
Question: What I cannot understand is, why my Deployment does not fail to install with Helm.
After all, my Deployment resource seems to be deployed before the ServiceBinding is. And it is the Secret generated from the ServiceBinding that my Deployment references. I would expect it to fail, since the Secret is not there yet when the Deployment is being installed. But that is not the case.
Is that just a timing glitch / lucky coincidence, or is this something I can rely on, and why?
Thanks!
As said in the comment I posted:
In fact your Deployment is failing at the start with Status: CreateContainerConfigError. Your Deployment is created before the Secret from the ServiceBinding. It only works because the Pod is recreated once the Secret from the ServiceBinding is available.
I wanted to give more insight, with an example, into why the Deployment didn't fail.
What is happening (simplified in order):
Deployment -> created and spawned a Pod
Pod -> failing with status CreateContainerConfigError because the Secret is missing
ServiceBinding -> creates the Secret in the background
Pod -> gets the required Secret and starts
The previously mentioned InstallOrder leaves ServiceInstance and ServiceBinding until last, per the comment on line 147.
Example
Assuming that:
There is a working Kubernetes cluster
Helm3 installed and ready to use
Following guides:
Kubernetes.io: Install Service Catalog using Helm
Magalix.com: Blog: Kubernetes Service Catalog
There is a Helm chart with the following files in the templates/ directory:
ServiceInstance
ServiceBinding
Deployment
Files:
ServiceInstance.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-instance
spec:
  clusterServiceClassExternalName: redis
  clusterServicePlanExternalName: 5-0-4
ServiceBinding.yaml:
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-binding
spec:
  instanceRef:
    name: example-instance
Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu
spec:
  selector:
    matchLabels:
      app: ubuntu
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
        # part below responsible for getting secret as env variable
        env:
        - name: DATA
          valueFrom:
            secretKeyRef:
              name: example-binding
              key: host
Applying the above resources and checking what is happening can be done in 2 ways:
The first method is to use the timestamps from $ kubectl get RESOURCE -o yaml
The second method is to use $ kubectl get RESOURCE --watch-only=true
First method
As said previously, the Pod from the Deployment couldn't start because the Secret was not available when the Pod tried to spawn. Once the Secret was available to use, the Pod started.
The statuses this Pod had were the following:
Pending
ContainerCreating
CreateContainerConfigError
Running
This is a table with timestamps of Pod and Secret:
| Pod | Secret |
|-------------------------------------------|-------------------------------------------|
| creationTimestamp: "2020-08-23T19:54:47Z" | - |
| - | creationTimestamp: "2020-08-23T19:54:55Z" |
| startedAt: "2020-08-23T19:55:08Z" | - |
You can get these timestamps by invoking the commands below:
$ kubectl get pod pod_name -n namespace -o yaml
$ kubectl get secret secret_name -n namespace -o yaml
You can also get additional information with:
$ kubectl get event -n namespace
$ kubectl describe pod pod_name -n namespace
Second method
This method requires preparation before running the Helm chart. Open additional terminal windows (for this particular case, 2 of them) and run:
$ kubectl get pod -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
$ kubectl get secret -n namespace --watch-only | while read line ; do echo -e "$(gdate +"%H:%M:%S:%N")\t $line" ; done
After that apply your Helm chart.
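For example, assuming the chart lives in the current directory and using the release name example (which matches the sh.helm.release.v1.example.v1 secret visible in the output below):
$ helm install example . -n namespace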
Disclaimer!
The above commands will watch for changes in resources and display them with a timestamp from the OS. Please remember that these commands are only for example purposes.
The output for Pod:
21:54:47:534823000 NAME READY STATUS RESTARTS AGE
21:54:47:542107000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:553799000 ubuntu-65976bb789-l48wz 0/1 Pending 0 0s
21:54:47:655593000 ubuntu-65976bb789-l48wz 0/1 ContainerCreating 0 0s
-> 21:54:52:001347000 ubuntu-65976bb789-l48wz 0/1 CreateContainerConfigError 0 4s
21:55:09:205265000 ubuntu-65976bb789-l48wz 1/1 Running 0 22s
The output for Secret:
21:54:47:385714000 NAME TYPE DATA AGE
21:54:47:393145000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:47:719864000 sh.helm.release.v1.example.v1 helm.sh/release.v1 1 0s
21:54:51:182609000 understood-squid-redis Opaque 1 0s
21:54:52:001031000 understood-squid-redis Opaque 1 0s
-> 21:54:55:686461000 example-binding Opaque 6 0s
Additional resources:
Stackoverflow.com: Answer: Helm install in certain order
Alibabacloud.com: Helm charts and templates hooks and tests part 3
So to answer my own question (and thanks to @dawid-kruk and the folks on the Service Catalog SIG on Slack):
In fact, the initial start of my Pods (the ones referencing the Secret created out of the ServiceBinding) fails! It fails because the Secret is actually not there the moment K8S tries to start the pods.
Kubernetes has a self-healing mechanism, in the sense that it tries (and retries) to reach the target state of the cluster as described by the various deployed resources.
Because Kubernetes keeps retrying to get the pods running, eventually (when the Secret is finally there) all conditions are satisfied and the pods start up nicely. Therefore, eventually, everything is running as it should.
How could this be streamlined? One possibility would be for Helm to include the custom resources ServiceBinding and ServiceInstance into its ordered list of installable resources and install them early in the installation phase.
But even without that, Kubernetes actually deals with it just fine. The order of installation (in this case) really does not matter. And that is a good thing!
According to the ingress-nginx controller docs, the objects will use the ingress-nginx namespace, and this can be changed to another namespace with the --watch-namespace flag.
But when I run
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx0.28.0/deploy/static/provider/aws/service-l7.yaml --watch-namespace=default
it reports:
Error: unknown flag: --watch-namespace
See 'kubectl apply --help' for usage.
You are mixing up one flag with another. By default, the following command will deploy the objects in the ingress-nginx namespace. But you want them to be in some other namespace, like default. To do so, you need to pass kubectl's -n or --namespace flag.
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx0.28.0/deploy/static/provider/aws/service-l7.yaml --namespace default
NB:
--watch-namespace is a flag of the nginx-ingress-controller. It is used while running the binary inside the container and needs to be set from deployment.spec.containers[].args[]. It is used to restrict the controller's watch to a single k8s namespace (by default it watches objects in all namespaces).
You need to set --watch-namespace in the args section of the nginx ingress controller deployment YAML:
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/$(NGINX_CONFIGMAP_NAME)
- --tcp-services-configmap=$(POD_NAMESPACE)/$(TCP_CONFIGMAP_NAME)
- --udp-services-configmap=$(POD_NAMESPACE)/$(UDP_CONFIGMAP_NAME)
- --publish-service=$(POD_NAMESPACE)/$(SERVICE_NAME)
- --annotations-prefix=nginx.ingress.kubernetes.io
- --watch-namespace=namespace
https://github.com/kubernetes/ingress-nginx/blob/master/deploy/cloud-generic/deployment.yaml
I have a container that I want to run on Kubernetes. Let's say image1.
When I run kubectl apply -f somePod.yml (which runs image1), how can I start the image with the user that ran the kubectl command?
It's not possible by design. Please find the explanation below:
In most cases Jobs create Pods, so I use Pods in my explanation.
In the case of Jobs it just means a slightly different YAML file.
$ kubectl explain job.spec.
$ kubectl explain job.spec.template.spec
Users run kubectl using user accounts.
Pods run using service accounts. There is no way to run a pod "from a user account".
Note: in recent versions spec.ServiceAccount was replaced by spec.serviceAccountName
However, you can use user account credentials by running kubectl inside a Pod's container or making curl requests to Kubernetes api-server from inside a pod container.
Using Secrets is the most convenient way to do that.
What else you can do to differentiate users in the cluster:
create a namespace for each user
limit user permission to specific namespace
create default service account in that namespace.
This way, if the user creates a Pod without specifying spec.serviceAccountName, it will by default use the default service account from the Pod's namespace.
You can even give the default service account the same permissions as the user account. The only difference is that service accounts exist in a specific namespace.
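A minimal sketch of that per-user setup (the user name mark and the built-in edit ClusterRole are only an illustration of the idea):
$ kubectl create namespace mark
$ kubectl create rolebinding mark-edit \
    --clusterrole=edit \
    --serviceaccount=mark:default \
    --namespace=mark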
If you need to use the same namespace for all users, you can use helm charts to set the correct serviceAccountName for each user (imagine you have service accounts with the same names as the users) by using --set command line arguments as follows:
$ cat testchart/templates/job.yaml
...
serviceAccountName: {{ .Values.saname }}
...
$ export SANAME=$(kubectl config view --minify -o jsonpath='{.users[0].name}')
$ helm template ./testchart --set saname=$SANAME
---
# Source: testchart/templates/job.yaml
...
serviceAccountName: kubernetes-admin
...
You can also specify namespace for each user in the same way.
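For example, the namespace could come from another value in the same template (the value name usernamespace is hypothetical):
$ cat testchart/templates/job.yaml
...
  namespace: {{ .Values.usernamespace }}
...
$ helm template ./testchart --set saname=$SANAME --set usernamespace=$SANAME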
I am still not sure if I understood your question correctly.
However, kubectl doesn't have an option to pass a user or service account when creating jobs:
kubectl create job --help
Create a job with the specified name.
Examples:
# Create a job
kubectl create job my-job --image=busybox
# Create a job with command
kubectl create job my-job --image=busybox -- date
# Create a job from a CronJob named "a-cronjob"
kubectl create job test-job --from=cronjob/a-cronjob
Options:
--allow-missing-template-keys=true: If true, ignore any errors in templates when a field or
map key is missing in the template. Only applies to golang and jsonpath output formats.
--dry-run=false: If true, only print the object that would be sent, without sending it.
--from='': The name of the resource to create a Job from (only cronjob is supported).
--image='': Image name to run.
-o, --output='': Output format. One of:
json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
--save-config=false: If true, the configuration of current object will be saved in its
annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to
perform kubectl apply on this object in the future.
--template='': Template string or path to template file to use when -o=go-template,
-o=go-template-file. The template format is golang templates
[http://golang.org/pkg/text/template/#pkg-overview].
--validate=true: If true, use a schema to validate the input before sending it
Usage:
kubectl create job NAME --image=image [--from=cronjob/name] -- [COMMAND] [args...] [flags]
[options]
Use "kubectl options" for a list of global command-line options (applies to all commands).
You can specify many factors inside your YAML definition. For example, you could create a ServiceAccount or specify runAsUser in a pod configuration. However, this requires having a definition file instead of doing everything on the kubectl command line.
Here you can find how to do it with the runAsUser parameter.
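A hedged example of the runAsUser approach (the pod name and UID 1000 are only illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-runasuser
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: alpine
    image: alpine:3.9
    command:
    - "sleep"
    - "10000"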
You could also consider using a ServiceAccount. Here is an article which might help you. However, you would need to create a specific ServiceAccount first.
It would look similar to this:
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-sa
spec:
  serviceAccountName: demo-sa
  containers:
  - name: alpine
    image: alpine:3.9
    command:
    - "sleep"
    - "10000"
If this is for some labs or practice, you could also think about creating a customized Docker image using a Dockerfile.
Unfortunately, the previous options are hardcoded. Another solution would need a specific script and many pipelines.
In addition, as you mentioned in the title, you can use a ConfigMap to pass some values to the configuration.
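For example, a minimal sketch of a ConfigMap consumed as environment variables (all names and values here are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  APP_MODE: "debug"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo-config
spec:
  containers:
  - name: alpine
    image: alpine:3.9
    command:
    - "sleep"
    - "10000"
    envFrom:
    - configMapRef:
        name: demo-config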