I want to execute set in a pod, to analyze the environment variables:
kubectl exec my-pod -- set
But I get this error:
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "set": executable file not found in $PATH: unknown
I think this is a special case, because there is no set executable the way there is, for example, an ls executable.
Remarks
When I open a shell in the pod, it's possible to call set there.
When I call kubectl exec with other commands, for example ls, I get no error.
There are some other questions regarding kubectl exec. But these do not apply to my question, because my problem is about executing set.
set is not a binary but a shell builtin; called without arguments it lists the shell's variables, which is why there is no set executable on $PATH.
If you want to set an environment variable before executing a follow-up command, consider using env:
kubectl exec mypod -- env NAME=value123 script01
# or
kubectl exec mypod -- /bin/sh -c 'NAME=value123 script01'
see https://stackoverflow.com/a/55894599/93105 for more information
If you want to set the environment variable for the lifetime of the pod, then you probably want to set it in the YAML manifest of the pod itself before creating it, as sketched below.
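A minimal sketch of that approach, assuming a throwaway busybox pod; the pod, container, image, and variable names are illustrative, so adjust them to your own setup:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod          # illustrative name; pick one that does not clash with an existing pod
spec:
  containers:
  - name: app          # illustrative container name
    image: busybox     # illustrative image
    command: ["sleep", "3600"]
    env:
    - name: NAME
      value: "value123"
EOF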
You can also run set if you start a shell first:
kubectl exec mypod -- /bin/sh -c 'set'
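If you only need the environment variables rather than everything set prints, the standalone env binary (present in most images, though not guaranteed) gives you that without starting a shell:
kubectl exec mypod -- env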
Related
If I do
POD=$($KUBECTL get pod -lsvc=app,env=production -o jsonpath="{.items[0].metadata.name}")
kubectl debug -it --image=mpen/tinker "$POD" -- zsh -i
I can get into a shell running inside my pod, but I want access to the filesystem for a container I've called "php". I think this should be at /proc/1/root/app but that directory doesn't exist. For reference, my Dockerfile has:
WORKDIR /app
COPY . .
So all the files should be in the root /app directory.
If I add --target=php then I get permission denied:
❯ cd /proc/1/root
cd: permission denied: /proc/1/root
How do I get access to the files?
Reading through the documentation, it seems kubectl debug won't give you access to the filesystem of another container.
The simplest option may be to use kubectl exec to start a shell inside an existing container. There are some cases in which this isn't an option (for example, some containers contain only a single binary and won't have a shell or other common utilities available), but a php container will typically have a complete filesystem.
In this case, you can simply:
kubectl exec -it $POD -- sh
You can replace sh by bash or zsh depending on what shells are available in the existing image.
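Since the container you care about is named php, you can also target it explicitly with -c; a minimal sketch, assuming the php image ships a shell:
kubectl exec -it "$POD" -c php -- sh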
The linked documentation provides several other debugging options, but all involve working on copies of the pod.
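For completeness, one of those documented options is debugging a copy of the pod with an extra container; a sketch where the copy name and debug image are illustrative:
kubectl debug "$POD" -it --copy-to=my-debug-copy --image=busybox -- sh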
I'm trying to run the following command:
kubectl exec vault-1 -- vault operator raft join -leader-ca-cert=`cat "$VAULT_CACERT"` https://vault-0.vault-internal:8200
The goal here is to get the contents of the cert file at the path stored in $VAULT_CACERT (a variable set on the pod) and pass it in as -leader-ca-cert via kubectl. When I run it I get cat: '': No such file or directory, which seems to indicate it is using my local machine's environment. Connecting to the pod and running the command there does work.
I've tried a few different commands and I can't seem to find a way to achieve what I want through kubectl. Is there a better way to pass this data in?
If the environment variable lives inside the container, then you should not use backticks in the exec command: backticks are command substitution, and your local shell performs it before kubectl ever runs. Use single quotes to avoid that expansion in your local terminal.
So you can try something like the command below, which hands the whole line to a shell inside the container so that both the variable expansion and the command substitution happen there:
kubectl exec vault-1 -- sh -c 'vault operator raft join -leader-ca-cert="$(cat "$VAULT_CACERT")" https://vault-0.vault-internal:8200'
I tested something like the command below and it works for me:
kubectl exec demo -- sh -c 'cat "$FILE_PATH"'
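To make the local-versus-remote expansion concrete, compare the two quoting styles (a sketch reusing the same variable, assuming VAULT_CACERT is not set in your local shell):
# the inner quotes are expanded by your LOCAL shell, so the container runs cat ""
# and fails with: cat: '': No such file or directory (the error from the question)
kubectl exec vault-1 -- sh -c "cat \"$VAULT_CACERT\""
# the literal string reaches the container, whose shell expands the variable there
kubectl exec vault-1 -- sh -c 'cat "$VAULT_CACERT"'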
I am trying to copy files from the pod to local using the following command:
kubectl cp /namespace/pod_name:/path/in/pod /path/in/local
But the command terminates with exit code 126 and the copy doesn't take place.
Similarly, while trying to copy from local to the pod using the following command:
kubectl cp /path/in/local /namespace/pod_name:/path/in/pod
It throws the following error:
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "tar": executable file not found in $PATH: unknown
Please help me with this.
kubectl cp is actually a very small wrapper around a kubectl exec ... tar c | tar x pipeline. A side effect of this is that you need a working tar executable in the target container, which you do not appear to have.
In general kubectl cp is best avoided, it's usually only good for weird debugging stuff.
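If tar were present, the roughly equivalent hand-rolled pipeline would look like this (a sketch; the namespace and paths are illustrative, and the local target directory must already exist):
kubectl exec -n namespace pod_name -- tar cf - -C /path/in/pod . | tar xf - -C /path/in/local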
kubectl cp requires tar to be present in your container, as the help text says:
!!!Important Note!!!
Requires that the 'tar' binary is present in your container
image. If 'tar' is not present, 'kubectl cp' will fail.
Make sure your container contains the tar binary in its $PATH
An alternative way to copy a file from the local filesystem into a container:
cat [local file path] | kubectl exec -i -n [namespace] [pod] -c [container] -- sh -c 'cat > [remote file path]'
A useful command to copy a file from a pod to local:
kubectl exec -n <namespace> <pod> -- cat <filename with path> > <filename>
For me the cat worked like this:
cat <file name> | kubectl exec -i <pod-id> -- sh -c "cat > <filename>"
Example:
cat file.json | kubectl exec -i server-77b7976cc7-x25s8 -- sh -c "cat > /tmp/file.json"
I didn't need to specify the namespace since I ran the command from a specific project, and since we have only one container, I didn't need to specify that either.
I would like to execute a command in a container (let it be ls) then read the exit code with echo $?
kubectl exec -ti mypod -- bash -c "ls; echo $?" does not work because it returns the exit code of my current shell, not the one from the container.
So I tried to use eval on an environment variable I defined in my manifest:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: container2
    image: varunuppal/nonrootsudo
    env:
    - name: resultCmd
      value: 'echo $?'
Then kubectl exec -ti mypod -- bash -c "ls;eval $resultCmd", but the eval command does not return anything:
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
Note that I can run these two commands within the container
kubectl exec -ti mypod bash
#ls;eval $resultCmd
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
0
How can I make it work?
Thanks in advance,
This is happening because you use double quotes instead of single ones.
Single quotes won't substitute anything, but double quotes will.
From the bash documentation:
3.1.2.2 Single Quotes
Enclosing characters in single quotes (') preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.
To summarize, this is how your command should look:
kubectl exec -ti firstpod -- bash -c 'ls; echo $?'
Using the POSIX shell eval command is wrong 99.999% of the time, even if you ignore the presence of Kubernetes in this question. The problem in your question is that your kubectl command expands $resultCmd in the local shell from which you ran kubectl, specifically because of your use of double quotes. That interactive shell has no knowledge of the definition of resultCmd in your manifest file, so it replaces $resultCmd with nothing.
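You can see that local expansion for yourself by echoing the double-quoted argument before handing it to kubectl (assuming resultCmd is unset in your local shell):
echo "ls;eval $resultCmd"
# prints just: ls;eval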
Thanks Kurtis Rader and Thomas for your answers.
It also works when I precede the $? with a backslash:
kubectl exec -ti firstpod -- bash -c "ls; echo \$?"
I have created a job in Bamboo with an SSH task that runs on my server. The server already has kubectl installed, and the command below executes successfully there:
echo `kubectl get namespace`
But while running the command through the job, it shows the error below:
bash: line 5: kubectl: command not found
Please make sure that the kubectl binary is in the PATH of the user context that your job runs in.
Otherwise you should use the absolute path of the kubectl executable, e.g. /usr/bin/kubectl.
Identify the location of the kubectl executable: which kubectl
Move it from its current location to a location included in PATH, e.g. sudo mv ./kubectl /usr/local/bin/kubectl
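For example, in the SSH task you can either call kubectl by its absolute path or extend PATH at the top of the script (a sketch; /usr/local/bin is a common location and may differ on your server):
# option 1: call kubectl by its absolute path
/usr/local/bin/kubectl get namespace
# option 2: extend PATH for the rest of the script
export PATH="$PATH:/usr/local/bin"
kubectl get namespace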