Timeout for Kubectl exec - kubernetes

How can I set the timeout for the kubectl exec command ?
The below command does not work
kubectl exec -it pod_name bash --requrest-timeout=0 -n test

You have a typo: the flag is --request-timeout, not --requrest-timeout. Also put the flags before the pod name and separate the command with --. Try:
kubectl exec -it -n test --request-timeout=0 pod_name -- bash
See kubectl official documentation about request-timeout
--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
Note that "0" is already the default.
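Since "0" disables the timeout, setting a finite limit means passing a duration value. A minimal sketch (pod and namespace names are placeholders) that assembles such a command, with the flags placed before the pod name:

```shell
# Hypothetical pod/namespace; --request-timeout takes values like 30s, 2m, 1h.
# Building the command as an array makes the intended word-splitting explicit.
cmd=(kubectl exec -it -n test --request-timeout=5m pod_name -- bash)
echo "${cmd[@]}"
```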

We ran into this issue standing up an on-prem instance of K8s. The answer in our situation was haproxy.
If you have a load balancer in front of your K8s API (control plane), I'd look at a timeout there as the culprit.
I believe the default for haproxy was 20 seconds; after I changed it to 60m, we never noticed the problem again.
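If haproxy sits in front of the API server, the relevant settings are the client/server timeouts and, for exec's upgraded connections, the tunnel timeout. A minimal sketch of an haproxy.cfg excerpt, assuming a typical layout (values are illustrative):

```
# haproxy.cfg excerpt: long timeouts so exec/watch sessions are not cut off
defaults
    timeout connect 10s
    timeout client  60m
    timeout server  60m
    timeout tunnel  60m   # applies to upgraded (websocket/SPDY) connections
```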

Related

Why does kubectl exec need a --?

If I run the command
$ kubectl exec pod-name echo Hello World
I get a deprecation error message asking me to include the '--' characters.
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Why was the decision made to require the '--' characters there? It seems unnecessary to me. I understand it's deprecated, I'm just trying to understand the reasoning behind the decision.
According to the book "Kubernetes in Action" by Marko Luksa:
Why the double dash?
The double dash (--) in the command signals the end of command options for
kubectl. Everything after the double dash is the command that should be executed
inside the pod. Using the double dash isn’t necessary if the command has no
arguments that start with a dash. But in your case, if you don’t use the double dash
there, the -s option would be interpreted as an option for kubectl exec and would
result in the following strange and highly misleading error:
$ kubectl exec kubia-7nog1 curl -s http://10.111.249.153
The connection to the server 10.111.249.153 was refused – did you
specify the right host or port?
This has nothing to do with your service refusing the connection. It’s because
kubectl is not able to connect to an API server at 10.111.249.153 (the -s option
is used to tell kubectl to connect to a different API server than the default).
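The double dash is a general Unix convention for ending option parsing, not something kubectl invented. A tiny, cluster-free illustration with grep: without --, grep would treat -v as its own flag; with --, it becomes the search pattern:

```shell
# grep sees "-v" as a pattern (not a flag) because of "--".
# The input has two lines, "-v" and "foo"; one of them matches.
printf '%s\n' -v foo | grep -c -- -v
```

This prints 1, the count of lines containing the literal string "-v".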

kubectl logs command does not appear to respect --limit-bytes option

When I issue the
kubectl logs MY_POD_NAME --limit-bytes=1
command, the --limit-bytes option is ignored and I get all of the pod's logs.
My Kubernetes version is 1.15.3.
Trying to understand why that would be. When I issue the same command in a GKE setup, the --limit-bytes option works as expected. I wonder what might be different in my setup that prevents this option from working correctly. (This is on CentOS.)
Update: I tracked down the issue to Docker's --log-driver option.
If the Docker --log-driver is set to 'json-file', then kubectl logs works fine with the --limit-bytes option. However, if the Docker --log-driver is set to 'journald', then kubectl logs ignores the --limit-bytes option. Seems like a kubectl bug to me.
If you had run the command without a pod name, you should have seen the following error:
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples.
Execute:
$ kubectl logs your_pod_name -c container_name --limit-bytes=1 -n namespace_name
If you set the --limit-bytes flag, note what its documentation says:
--limit-bytes=0: Maximum bytes of logs to return. Defaults to no limit.
See the documentation of kubectl logs.
Please let me know if it helps.
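To see what --limit-bytes does without a cluster: its effect is to truncate the log stream to N bytes, much like piping through head -c. A runnable stand-in on a fake log:

```shell
# Truncate a fake two-line log to its first byte, mimicking --limit-bytes=1
printf 'line one\nline two\n' | head -c 1
```

This prints just "l", which is also what a working kubectl logs --limit-bytes=1 returns: the first byte of the log.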
Yeah, it should work fine.
If your application has a single container, please try:
kubectl -n namespace logs pod_name --limit-bytes=1
If it has multiple containers, then please specify one, like:
kubectl -n namespace logs pod_name -c container_name --limit-bytes=1

How to tail all logs in a kubernetes cluster

I tried this command:
kubectl logs --tail
I got this error/help output:
Error: flag needs an argument: --tail
Aliases:
logs, log
Examples:
# Return snapshot logs from pod nginx with only one container
kubectl logs nginx
# Return snapshot logs for the pods defined by label app=nginx
kubectl logs -lapp=nginx
# Return snapshot of previous terminated ruby container logs from pod web-1
kubectl logs -p -c ruby web-1
# Begin streaming the logs of the ruby container in pod web-1
kubectl logs -f -c ruby web-1
# Display only the most recent 20 lines of output in pod nginx
kubectl logs --tail=20 nginx
# Show all logs from pod nginx written in the last hour
kubectl logs --since=1h nginx
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
ummm I just want to see all the logs, isn't this a common thing to want to do? How can I tail all the logs for a cluster?
kail from the top answer is Linux and macOS only, but Stern also works on Windows.
It can do pod matching based on e.g. a regex match for the name, and then can follow the logs.
To follow ALL pods without printing any prior logs from the default namespace you would run e.g.:
stern ".*" --tail 0
For absolutely everything, incl. internal stuff happening in kube-system namespace:
stern ".*" --all-namespaces --tail 0
Alternatively you could e.g. follow all login-.* containers and get some context with
stern "login-.*" --tail 25
If you don't mind using a third party tool, kail does exactly what you're describing.
Streams logs from all containers of all matched pods. [...] With no arguments, kail matches all pods in the cluster.
With plain kubectl, the closest you can get is fetching the logs of multiple pods via a label selector. Note that -l takes a single selector string (repeating the flag just overrides the earlier value), so to match several label values use a set-based selector:
kubectl logs -f -l 'app in (nginx,php)'
For all logs of the entire cluster you have to set up centralized log collection such as Elasticsearch, Fluentd and Kibana. The simplest way to do it is an installation using Helm charts as described here: https://linux-admin.tech/kubernetes/logging/2018/10/24/elk-stack-installation.html
I would recommend using a nice bash script named kubetail.
You can just download the bash script, add it to your project, and run, for example:
$ ./some-tools-directory/kubetail.sh --selector app=user --since 10m
to tail all pods with the label app=user.
Notice the nice display of colors per pod.
(*) Run ./tools/kubetail.sh -h to see some nice execution options.
kubetail.sh <search term> [-h] [-c] [-n] [-t] [-l] [-d] [-p] [-s] [-b] [-k] [-v] [-r] [-i] -- tail multiple Kubernetes pod logs at the same time
where:
-h, --help Show this help text
-c, --container The name of the container to tail in the pod (if multiple containers are defined in the pod).
Defaults to all containers in the pod. Can be used multiple times.
-t, --context The k8s context. ex. int1-context. Relies on ~/.kube/config for the contexts.
-l, --selector Label selector. If used the pod name is ignored.
-n, --namespace The Kubernetes namespace where the pods are located (defaults to "default")
-f, --follow Specify if the logs should be streamed. (true|false) Defaults to true.
-d, --dry-run Print the names of the matched pods and containers, then exit.
-p, --previous Return logs for the previous instances of the pods, if available. (true|false) Defaults to false.
-s, --since Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to 10s.
-b, --line-buffered This flag indicates to use line-buffered output. Defaults to false.
-e, --regex The type of name matching to use (regex|substring)
-j, --jq If your output is json - use this jq-selector to parse it.
example: --jq ".logger + \" \" + .message"
-k, --colored-output Use colored output (pod|line|false).
pod = only color pod name, line = color entire line, false = don't use any colors.
Defaults to line.
-z, --skip-colors Comma-separated list of colors to not use in output
If you have green foreground on black, this will skip dark grey and some greens -z 2,8,10
Defaults to: 7,8
--timestamps Show timestamps for each log line
--tail Lines of recent log file to display. Defaults to -1, showing all log lines.
-v, --version Prints the kubetail version
-r, --cluster The name of the kubeconfig cluster to use.
-i, --show-color-index Show the color index before the pod name prefix that is shown before each log line.
Normally only the pod name is added as a prefix before each line, for example "[app-5b7ff6cbcd-bjv8n]",
but if "show-color-index" is true then color index is added as well: "[1:app-5b7ff6cbcd-bjv8n]".
This is useful if you have color blindness or if you want to know which colors to exclude (see "--skip-colors").
Defaults to false.
examples:
kubetail.sh my-pod-v1
kubetail.sh my-pod-v1 -c my-container
kubetail.sh my-pod-v1 -t int1-context -c my-container
kubetail.sh '(service|consumer|thing)' -e regex
kubetail.sh -l service=my-service
kubetail.sh --selector service=my-service --since 10m
kubetail.sh --tail 1
I have hardly ever seen anyone pull all logs from an entire cluster. You usually either need logs to search for a specific issue or follow (-f) a particular routine, or to collect audit information, or you stream all logs to a log sink to have them prepared for monitoring.
However, if you really need to fetch all logs, the --tail option is not what you're looking for: tail only shows the last lines of a given log source, precisely to avoid spilling the entire log history into your terminal.
For Kubernetes, you can write a simple script in a language of your choice (bash, Python, whatever) that lists the pods with kubectl get pods --all-namespaces and iterates over them running kubectl -n <namespace> logs <pod>. Be aware, though, that a pod may contain multiple containers with individual logs each, and that there are also logs on the cluster nodes themselves, state changes in the deployments, extra metadata that changes, volume provisioning, and heaps more.
That's probably why it's quite uncommon to pull all logs from an entire cluster, and why there's no easy (shortcut) way to do so.
# assumes you have pre-set KUBECONFIG or are using the default one ...
do_check_k8s_logs(){
  # set the desired namespaces here vvvv
  for namespace in apiv2 default kube-system; do
    while read -r pod ; do
      while read -r container ; do
        # -c selects the container; keep only the last 2000 lines of each
        kubectl -n "$namespace" logs "$pod" -c "$container" | tail -n 2000
      done < <(kubectl -n "$namespace" get pod "$pod" -o json | jq -r '.spec.containers[].name') ;
    done < <(kubectl -n "$namespace" get pods -o json | jq -r '.items[].metadata.name') \
      | tee -a ~/Desktop/k8s-$namespace-logs.$(date "+%Y%m%d_%H%M%S").log
  done
}
do_check_k8s_logs
For your application's data, you probably just want to tail all the pods in the cluster.
But if you want logs for the control plane of a cluster, you can use:
https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-eks-now-delivers-kubernetes-control-plane-logs-to-amazon-/

Kubernetes identify if deployment or scale up

I run some actions at pod startup after a deployment, but I don't want to run them on a scale-up. Is there a way to tell, when a pod is created, whether it comes from a new deployment or from a scale-up/recreate event?
So you want to run a DB script on the first available pod after the deployment.
Would the following steps work for you during the deployment creation:
$ kubectl apply -f app-deployment.yml
# Give the pods some time to start.
$ WAIT_BEFORE_DATABASE_SETUP="${WAIT_BEFORE_DATABASE_SETUP-120}"
$ sleep $WAIT_BEFORE_DATABASE_SETUP
# Pick any one pod that is in the Running state.
$ APP_POD_NAME=$(kubectl get pods --field-selector=status.phase=Running -o=custom-columns=NAME:.metadata.name | grep <deployment-name> | head -1)
$ kubectl exec -it $APP_POD_NAME -- bash -c "/scripts/run_db_updates.sh"
One possible solution would be to update a flag in your DB and have your script check against the value of that flag.
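The flag-check idea can be sketched in the startup script itself. Here the database lookup is replaced by a hypothetical shell variable so the sketch is runnable as-is:

```shell
# Hypothetical stand-in for querying a "setup_done" flag from the database;
# in a real setup you'd fetch and update this value via your DB client.
setup_done="no"
if [ "$setup_done" = "no" ]; then
  echo "running one-time setup"
  # ... run the setup script here, then set the flag in the DB
fi
```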

How do you link Pachyderm with the correct Kubernetes context?

I have more than one Kubernetes context. When I change contexts, I have been using kill -9 to kill the port-forward in order to redo the pachtctl port-forward & command. I wonder if this is the right way of doing it.
In more detail:
I start off being in one Kubernetes context, call it context_x. I then want to change to my local context, called minikube. I also want to see my repos for this minikube context, but when I use pachctl list-repo, it still shows context_x's Pachyderm repos. When I then do pachctl port-forward, I get an error message about the address already being in use. So I have to ps -a, kill -9 those port-forward processes, and then run pachctl port-forward again.
An example of what I've been doing:
$ kubectl config use-context minikube
$ pachctl list-repo #doesn't show minikube context's repos
$ pachctl port-forward &
...several error messages along the lines of:
Unable to create listener: Error listen tcp4 127.0.0.1:30650: bind: address already in use
$ ps -a | grep forward
33964 ttys002 0:00.51 kubectl port-forward dash-12345678-abcde 38080:8080
33965 ttys002 0:00.51 kubectl port-forward dash-12345679-abcde 38081:8081
37245 ttys002 0:00.12 pachctl port-forward &
37260 ttys002 0:00.20 kubectl port-forward pachd-4212312322-abcde 30650:650
$ kill -9 37260
$ pachctl port-forward & #works as expected now
Also, kill -9 on the pachctl port-forward process (37245) doesn't work; it seems I have to kill -9 the underlying kubectl port-forward processes.
You can specify a different port using the -p flag, as mentioned in the docs. Is there a reason not to do that?
Also, starting the process in the background and then sending it a SIGKILL means its resources are not released properly, so the next run may fail because it cannot bind the same port again. Try running it without the & at the end.
Then, whenever you change context, all you need to do is press CTRL+C and start it again; this shuts the process down cleanly and releases the port.
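The difference matters because a process can only run its cleanup handlers on SIGTERM (kill's default) or SIGINT (Ctrl+C), never on SIGKILL. A minimal, cluster-free illustration of what a TERM'd process reports:

```shell
# A process killed by SIGTERM exits with status 128+15 = 143.
sleep 30 &
pid=$!
kill "$pid"             # default signal is SIGTERM, not SIGKILL
wait "$pid" 2>/dev/null
echo "exit status: $?"  # 143
```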
Just wanted to update this answer for anyone who finds it: pachctl now supports contexts, and a Pachyderm context includes a reference to its associated kubectl context. When you switch to a new pachctl context, pachctl will now use the associated kubectl context automatically (switching the kubectl CLI's own current context is still up to you).