How to increase the characters per line when using kubectl exec - kubernetes

First of all, any help is appreciated.
I want to execute a command in a container, so I run:
kubectl exec -ti busybox bash
but if I type more than about 70 characters into bash, the line wraps back onto itself and the output becomes unreadable.
Is there a way to increase the characters per line when using kubectl exec?
Environment (also see manifests for more detailed info):
CentOS 7.0.1406
Kubernetes: 1.2.0
etcd: 2.3.7
flannel: 0.5.3
docker: 1.10.3
Thanks a lot for any suggestions.

This will be supported in the upcoming Kubernetes 1.4 release (if you're interested, see its fix).
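Until you can upgrade, a common workaround is to set the terminal size yourself inside the container with stty. This is a minimal sketch, assuming the image provides stty and sh (busybox does); the row/column values are just examples:
# resize the in-container TTY to match (or exceed) your local terminal width
kubectl exec -ti busybox -- sh -c "stty rows 50 cols 200; exec sh"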

Related

Why does kubectl exec need a --?

If I run the command
$ kubectl exec pod-name echo Hello World
I get a deprecation error message asking me to include the '--' characters.
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Why was the decision made to require the '--' characters there? It seems unnecessary to me. I understand it's deprecated, I'm just trying to understand the reasoning behind the decision.
According to the book "Kubernetes in Action" by Marko Luksa:
Why the double dash?
The double dash (--) in the command signals the end of command options for kubectl. Everything after the double dash is the command that should be executed inside the pod. Using the double dash isn’t necessary if the command has no arguments that start with a dash. But in your case, if you don’t use the double dash there, the -s option would be interpreted as an option for kubectl exec and would result in the following strange and highly misleading error:
$ kubectl exec kubia-7nog1 curl -s http://10.111.249.153
The connection to the server 10.111.249.153 was refused – did you specify the right host or port?
This has nothing to do with your service refusing the connection. It’s because kubectl is not able to connect to an API server at 10.111.249.153 (the -s option is used to tell kubectl to connect to a different API server than the default).
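In other words, the fix is simply to put the double dash before the in-pod command, so that -s is passed to curl rather than to kubectl (same pod and URL as in the quoted example):
$ kubectl exec kubia-7nog1 -- curl -s http://10.111.249.153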

How can I keep a Pod from crashing so that I can debug it?

In Kubernetes, when a Pod repeatedly crashes and is in CrashLoopBackOff status, it is not possible to shell into the container and poke around to find the problem, due to the fact that containers (unlike VMs) live only as long as the primary process. If I shell into a container and the Pod is restarted, I'm kicked out of the shell.
How can I keep a Pod from crashing so that I can investigate if my primary process is failing to boot properly?
Redefine the command
In development only, a temporary hack to keep a Kubernetes pod from crashing is to redefine it and specify the container's command (corresponding to a Docker ENTRYPOINT) and args to be a command that will not crash. For instance:
containers:
- name: something
  image: some-image
  # `sh -c` evaluates a string as shell input
  command: [ "sh", "-c" ]
  # loop forever, printing "yo" every 5 seconds
  args: [ "while true; do echo 'yo' && sleep 5; done;" ]
This allows the container to run and gives you a chance to shell into it, like kubectl exec -it pod/some-pod -- sh, and investigate what may be wrong.
This needs to be undone after debugging so that the container will run the command it's actually meant to run.
Adapted from this blog post.
There are also other methods for debugging pods that are worth noting for your use case:
If your container has previously crashed, you can access the previous container's crash log with: kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
Debugging with an ephemeral debug container: Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images. kubectl has an alpha command that can create ephemeral containers for debugging beginning with version v1.18. An example for this method can be found here.
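As a rough sketch of that approach (pod and container names are the ones used in the YAML above; this requires a cluster and kubectl recent enough for ephemeral containers, alpha in v1.18 as kubectl alpha debug and kubectl debug in later releases):
# attach an interactive busybox ephemeral container to the troubled pod;
# --target shares the process namespace of the named container
kubectl debug -it some-pod --image=busybox --target=something -- sh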
In my case I built the image on a Mac M1 (Apple Silicon). The pod crashed with no explicit message about why.
The problem was that I had also been debugging with Docker on the same M1, so I couldn't see what was wrong there either.
I needed to build the image with docker build --platform linux/amd64.
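For example (the image name and registry here are placeholders), the build looks something like:
# force an amd64 build even on an arm64 (Apple Silicon) host
docker build --platform linux/amd64 -t registry.example.com/my-app:latest .
docker push registry.example.com/my-app:latest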

kubectl logs command does not appear to respect --limit-bytes option

When I issue the
kubectl logs MY_POD_NAME --limit-bytes=1
command, the --limit-bytes option is ignored and I get all of the pod's logs.
My Kubernetes version is 1.15.3.
Trying to understand why that would be. When I issue the same command in a GKE setup, the --limit-bytes option works as expected. I wonder what might be different in my setup that prevents this option from working correctly. (This is on CentOS.)
Update: I tracked down the issue to Docker's --log-driver option.
If the Docker --log-driver is set to 'json-file', then kubectl logs works fine with the --limit-bytes option. However, if the Docker --log-driver is set to 'journald', then kubectl logs ignores the --limit-bytes option. Seems like a kubectl bug to me.
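If you control the nodes and want --limit-bytes to behave, one option is to switch Docker back to the json-file driver. A hedged sketch, assuming Docker picks up /etc/docker/daemon.json on your CentOS nodes:
# /etc/docker/daemon.json should contain:  { "log-driver": "json-file" }
# then restart Docker; containers keep the driver they were created with,
# so pods may need to be recreated afterwards
systemctl restart docker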
After executing this command you should have seen the following error:
error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and examples.
Execute:
$ kubectl logs your_pod_name -c container_name --limit-bytes=1 -n namespace_name
If you set the --limit-bytes flag, note the documented behavior:
--limit-bytes=0: Maximum bytes of logs to return. Defaults to no limit.
Documentation of kubectl-logs.
Please let me know if it helps.
Yeah, it should work fine.
If you have a single container in the pod, try:
kubectl -n namespace logs pod_name --limit-bytes=1
If you have multiple containers, also specify the container name:
kubectl -n namespace logs pod_name -c container_name --limit-bytes=1

Timeout for Kubectl exec

How can I set the timeout for the kubectl exec command ?
The below command does not work
kubectl exec -it pod_name bash --requrest-timeout=0 -n test
You have a typo, try:
kubectl exec -it pod_name bash --request-timeout=0 -n test
See kubectl official documentation about request-timeout
--request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
Note that "0" is already the default.
We ran into this issue standing up an on-prem instance of K8s. The answer in our situation was haproxy.
If you have a load-balancer in front of your K8s API (control-plane), I'd look at a timeout on that as the culprit.
I believe the default for haproxy was 20 seconds so after I changed it to 60m, we never noticed the problem again.
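For reference, the relevant haproxy settings are the client/server timeouts (and, for exec's long-lived upgraded streams, the tunnel timeout); a hedged sketch of a defaults section, with values that are illustrative rather than anything mandated by Kubernetes:
defaults
    timeout connect 10s
    timeout client  60m
    timeout server  60m
    timeout tunnel  60m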

How can I run a Kubernetes pod with the sole purpose of running exec against it?

Please before you comment or answer, this question is about a CLI program, not a service. Apparently 90% of Kubernetes has to do with running services, so there is sparse documentation for CLI programs meant to be part of a pipeline workflow.
I have a command line program that uses stdout for JSON results.
I have a docker image for the command line program.
If I create the container as a Kubernetes Job, then stdout and stderr are mixed and require heuristic scrubbing to get pure JSON out.
The stderr messages are from native libraries outside of my direct control.
Supposedly, if I run kubectl exec against a running pod, I will get the normal stdout/stderr pipes.
Is there a way to just have the pod running without an entrypoint (or some dummy service entrypoint) with the sole purpose of running kubectl exec against it?
Is there a way to just have the pod running without an entrypoint [...]?
A pod consists of one or more containers, each of which has an individual entrypoint. It is certainly possible to run a container with a dummy command, for example, you can build an image with:
CMD sleep inf
This will run a container that will persist until you kill it, and you could happily docker exec into it.
You can apply the same solution to k8s. You could build an image as described above and deploy that in a pod, or you could use an existing image and simply set the command, as in:
spec:
  containers:
  - name: mycontainer
    image: myexistingimage
    command: ["sleep", "inf"]
You can use kubectl like the docker CLI: https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/
kubectl run does the job by itself; there is no need for a workaround.
Additionally, you can attach I/O and disable automatic restart:
kubectl run -i -t busybox --image=busybox --restart=Never
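Once such a pod is running, kubectl exec (without -t) keeps the remote stdout and stderr as separate streams, so the JSON stays clean. A sketch with a hypothetical CLI binary my-cli inside a hypothetical pod my-pod:
# stdout (pure JSON) goes to the file, stderr (native-library noise) to a log
kubectl exec my-pod -- my-cli --some-flag > result.json 2> errors.log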