I have a Docker image with the below entrypoint.
ENTRYPOINT ["sh", "-c", "python3 -m myapp ${*}"]
I tried to pass arguments to this image in my Kubernetes deployments so that ${*} is replaced with them, but after checking the logs it seems that the first argument was ignored.
I tried to reproduce the result regardless of the image, and applied the below pod:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: postgres # or any image you may like
    command: ["bash -c /bin/echo ${*}"]
    args:
    - sth
    - serve
    - arg
When I check the logs, I just see serve arg, and sth is completely ignored.
Any idea what went wrong, or what I should do to pass arguments to exec-style entrypoints instead?
First, your command has quoting problems -- you are effectively running bash -c echo.
Second, you need to closely read the documentation for the -c option (emphasis mine):
If the -c option is present, then commands are read from
the first non-option argument command_string. If there
are arguments after the command_string, the first argument
is assigned to $0 and any remaining arguments are assigned
to the positional parameters. The assignment to $0 sets
the name of the shell, which is used in warning and error
messages.
So you want:
command: ["bash", "-c", "echo ${*}", "bash"]
Given your pod definition, this would set $0 to bash, and then $1 to sth, $2 to serve, and $3 to arg.
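For completeness, here is a minimal sketch of the corrected pod (same metadata and args as in your example):
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: postgres
    # the trailing "bash" fills $0, so sth, serve and arg land in $1, $2 and $3
    command: ["bash", "-c", "echo ${*}", "bash"]
    args:
    - sth
    - serve
    - arg
With this, the logs should show sth serve arg.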
There are some subtleties around using sh -c here. For the examples you show, it's not necessary. The important things to remember are that the ENTRYPOINT and CMD are combined together into a single command (or, in Kubernetes, command: and args:), and that sh -c generally takes only a single string argument and acts on it.
The examples you show don't use any shell functionality and you can break the commands into their constituent words as YAML list items.
command:
- /bin/echo
- sth
- serve
- arg
For the Dockerfile case, there is a pattern of using ENTRYPOINT to specify a command and CMD for its arguments, which parallels Kubernetes's syntax here. For this to work well, I'd avoid sh -c (including the implicit sh -c from the ENTRYPOINT shell form); just provide the first set of words in JSON-array form.
ENTRYPOINT ["python", "-m", "myapp"]
# don't override command:, the image's ENTRYPOINT is right, but do add
args:
- foo
- bar
- baz
(If your entrypoint setup is complex enough to require shell operators, it's typically easier to write and debug to move it into a dedicated script and make that script be the ENTRYPOINT or CMD, rather than trying to figure out sh -c semantics and YAML quoting.)
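As a minimal sketch of that pattern (entrypoint.sh is a hypothetical name, and the setup step is a placeholder):
#!/bin/sh
# any shell-level setup (exports, waiting for dependencies, etc.) goes here
# exec replaces the shell with the application so it receives signals directly;
# "$@" forwards the CMD/args: words without re-splitting them
exec python3 -m myapp "$@"
with, in the Dockerfile:
COPY entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]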
Related
I would like to start a container with bash only. In other words, a bash that does not stop; I guess an interactive bash or shell.
So far, when I put something like ["bash"] or ["/bin/bash"] or simply bash, the container runs and stops. Is there a way to start a bash that runs continuously?
EDIT 1
So far the only way that works for me is to write:
command:
- tail
- -f
- /dev/null
EDIT 2
My use case here is that I want to build a Docker image simply to develop in it, so that image has all the tools I need to work.
Hence I wonder how such a container should be started. I don't want to run any of the dev tools at start; I simply want the container to be available, ready for someone to run the interactive shell at any time.
You can try the sleep command in a while loop:
command: ["/bin/sh"]
args: ["-c", "while true; do sleep 10;done"]
You can create a container with:
command: ["cat"]
tty: true
stdin: true
That way it would consume less CPU and memory than bash.
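Putting that together, a minimal sketch of such a dev pod (the name dev-shell and the image are just examples):
apiVersion: v1
kind: Pod
metadata:
  name: dev-shell
spec:
  containers:
  - name: dev-shell
    image: ubuntu     # assumption: any image containing your dev tools
    command: ["cat"]  # cat blocks on the attached stdin, keeping the pod alive
    tty: true
    stdin: true
You can then open an interactive shell at any time with kubectl exec -ti dev-shell -- bash.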
I want to launch a container as a non-root user, but I cannot modify the original Dockerfile. I know I could do something like RUN useradd xx followed by USER xx in the Dockerfile to achieve that.
What I am doing now is modifying the yaml file like the following:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-pod
    image: xxx
    command:
    - /bin/sh
    - -c
    - |
      useradd xx -s /bin/sh;
      su -l xx; # this line is not working
      sleep 1000000;
When I exec into the pod, the default is still the root user. Can anyone help with that? Thanks in advance!
You need to use a security context, like below:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
Reference: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podsecuritycontext-v1-core
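For context, a minimal sketch of where this goes in a pod spec (the uid/gid values are just examples):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: my-pod
    image: xxx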
EDIT:
If you want to change the user in your container, you can add an extra Dockerfile layer; check below.
Add a Dockerfile layer:
FROM <your_image>
RUN adduser newuser
USER newuser
:
:
Now use the above custom image in your Kubernetes manifest.
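For example (the repository and tag names here are hypothetical):
docker build -t myrepo/xxx:nonroot .
docker push myrepo/xxx:nonroot
Then set image: myrepo/xxx:nonroot in your pod spec instead of the original image.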
+1 to dahiya_boy's answer; however, I'd like to add my three cents to what was already said.
I've reproduced your case using the popular nginx image. I also modified the commands from your example a bit, so that a home directory for the user xxx is created, and added some other commands for debugging purposes:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-pod
    image: nginx
    command:
    - /bin/sh
    - -c
    - |
      useradd xxx -ms /bin/bash;
      su xxx && echo $?;
      whoami;
      sleep 1000000;
After successfully applying the above yaml we can run:
$ kubectl logs my-pod
0
root
As you can see, the exit status printed by echo $? is 0, which means that the previous command in fact ran successfully. Even more: the construction with && implies that the second command is run if and only if the first command completed successfully (with exit status equal to 0). If su xxx didn't work, echo $? would never run.
Nonetheless, the very next command, which happens to be whoami, prints the actual user that is meant to run all commands in the container and which was defined in the original image. So no matter how many times you run su xxx, all subsequent commands will be run as user root (or another user, if one was defined in the Dockerfile of the image). So basically the only way to override it on the Kubernetes level is using the already mentioned securityContext:
You need to use a security context, like below:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
However, I understand that you cannot use this method if you have not previously defined your user in a custom image: it can only be done if a user with such a uid already exists.
So to the best of my knowledge, it's impossible to do this the way you presented in your question and it's not an issue or a bug. It simply works this way.
If you kubectl exec into your newly created Pod in interactive mode, you can see that everything works perfectly: the user was successfully added, and you can switch to this user without any problem:
$ kubectl exec -ti my-pod -- /bin/sh
# tail -2 /etc/passwd
nginx:x:101:101:nginx user,,,:/nonexistent:/bin/false
xxx:x:1000:1000::/home/xxx:/bin/bash
# su xxx
xxx@my-pod:/$ pwd
/
xxx@my-pod:/$ cd
xxx@my-pod:~$ pwd
/home/xxx
xxx@my-pod:~$ whoami
xxx
xxx@my-pod:~$
But it doesn't mean that by running su xxx as one of the commands provided in a Pod yaml definition, you will permanently change the default user.
I'd like to emphasize it again: in your example, su -l xxx runs successfully. It's not true that it doesn't work. In other words:
1. The container is started as user root.
2. User root runs su -l xxx which, once completed successfully, exits.
3. User root runs whoami.
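You can see the same behaviour in any root shell; a quick illustration (xxx is the user added above):
su xxx -c 'whoami'   # prints xxx: the command runs inside su's subshell
whoami               # prints root: the parent shell is unchanged
su only changes the user for the subshell it spawns; once that subshell exits, execution continues as the original user.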
So the only reasonable solution is, as already mentioned by @dahiya_boy, adding an extra layer and creating a custom image.
As to:
@Rosmee You can add new docker image layer and use that image in your kubernetes. – dahiya_boy 18 hours ago
Yes I know that, but as I said above, I cannot modify the original image, I need to switch user dynamically – Rosmee 17 hours ago
You say "I cannot modify the original image" and this is exactly what custom image is about. No one is talking here about modifying the original image. It remains untouched. By writing your own Dockerfile and e.g. by adding in it an extra user and setting it as a default one, you don't modify the original image at all, but build a new custom image on top of it. That's how it works and that's the way it is meant to be used.
I would like to execute a command in a container (let it be ls) and then read the exit code with echo $?.
kubectl exec -ti mypod -- bash -c "ls; echo $?" does not work, because it returns the exit code of my current shell, not the one from the container.
So I tried to use eval on an env variable I defined in my manifest:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: container2
    image: varunuppal/nonrootsudo
    env:
    - name: resultCmd
      value: 'echo $?'
Then I ran kubectl exec -ti mypod -- bash -c "ls;eval $resultCmd", but the eval command does not return anything:
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
Note that I can run these two commands within the container
kubectl exec -ti mypod bash
#ls;eval $resultCmd
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
**0**
How can I make it work?
Thanks in advance,
This is happening because you use double quotes instead of single ones.
Single quotes won't substitute anything, but double quotes will.
From the bash documentation:
3.1.2.2 Single Quotes
Enclosing characters in single quotes (') preserves the literal
value of each character within the quotes. A single quote may not
occur between single quotes, even when preceded by a backslash.
To summarize, this is how your command should look:
kubectl exec -ti firstpod -- bash -c 'ls; echo $?'
Using the POSIX shell eval command is wrong 99.999% of the time, even if you ignore the presence of Kubernetes in this question. The problem in your question is that your interactive shell expands $resultCmd before kubectl ever runs, specifically due to your use of double quotes. That interactive shell has no knowledge of the definition of $resultCmd in your "manifest" file, so it replaces $resultCmd with nothing.
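A quick way to see the difference locally (this is plain shell quoting, nothing Kubernetes-specific):
echo "ls; echo $?"   # the local shell substitutes $? first, producing e.g.: ls; echo 0
echo 'ls; echo $?'   # single quotes pass the string through literally: ls; echo $?
Only the single-quoted form delivers the literal $? to the bash inside the container, which is the shell whose exit status you actually want.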
Thanks Kurtis Rader and Thomas for your answers.
It also works when I precede the $? with a backslash:
kubectl exec -ti firstpod -- bash -c "ls; echo \$?"
I have a Docker image that basically runs a one-time script. That script takes 3 arguments. My Dockerfile is:
FROM <some image>
ARG URL
ARG USER
ARG PASSWORD
RUN apt update && apt install curl -y
COPY register.sh .
RUN chmod u+x register.sh
CMD ["sh", "-c", "./register.sh $URL $USER $PASSWORD"]
When I spin up the container using docker run -e URL=someUrl -e USER=someUser -e PASSWORD=somePassword -itd <IMAGE_ID>, it works perfectly fine.
Now I want to deploy this as a job.
My basic Job looks like:
apiVersion: batch/v1
kind: Job
metadata:
  name: register
spec:
  template:
    spec:
      containers:
      - name: register
        image: registeration:1.0
        args: ["someUrl", "someUser", "somePassword"]
      restartPolicy: Never
  backoffLimit: 4
But the pod errors out with:
Error: failed to start container "register": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"someUrl\": executable file not found in $PATH"
Looks like it is taking my args as commands and trying to execute them. Is that correct? What can I do to fix this?
In the Dockerfile as you've written it, two things happen:
The URL, username, and password are fixed in the image. Anyone who can get the image can run docker history and see them in plain text.
The container startup doesn't take any arguments; it just runs the single command with its fixed set of arguments.
Especially since you're planning to pass these arguments in at execution time, I wouldn't bother trying to include them in the image. I'd reduce the Dockerfile to:
FROM ubuntu:18.04
RUN apt update \
&& DEBIAN_FRONTEND=noninteractive \
apt install --assume-yes --no-install-recommends \
curl
COPY register.sh /usr/bin
RUN chmod u+x /usr/bin/register.sh
ENTRYPOINT ["register.sh"]
When you launch it, the Kubernetes args: get passed as command-line parameters to the entrypoint. (It is the same thing as the Docker Compose command: and the free-form command at the end of a plain docker run command.) Making the script be the container entrypoint will make your Kubernetes YAML work the way you expect.
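To make that concrete, here is roughly what the container ends up executing with the ENTRYPOINT above and your Job's args::
# ENTRYPOINT ["register.sh"] plus args: ["someUrl", "someUser", "somePassword"]
# is equivalent to running:
register.sh someUrl someUser somePassword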
In general I prefer using CMD to ENTRYPOINT. (Among other things, it makes it easier to docker run --rm -it ... /bin/sh to debug your image build.) If you do that, then the Kubernetes args: need to include the name of the script it's running:
args: ["./register.sh", "someUrl", "someUser", "somePassword"]
Use:
args: ["sh", "-c", "./register.sh someUrl someUser somePassword"]
I have the following lines in a Dockerfile, where I want to set a value in a config file to a default value before the application starts up at the end, and optionally allow setting it using the -e option when starting the container.
I am trying to do this using Docker's ENV command:
ENV CONFIG_VALUE default_value
RUN sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
CMD command_to_start_app
I have the string CONFIG_VALUE explicitly in the file CONFIG_FILE, and the default value from the Dockerfile gets correctly substituted. However, when I run the container with the added -e CONFIG_VALUE=100, the substitution is not carried out; the default value set in the Dockerfile is kept.
When I do
docker exec -i -t container_name bash
and echo $CONFIG_VALUE inside the container, the environment variable does contain the desired value 100.
Instructions in the Dockerfile are evaluated line-by-line when you do docker build and are not re-evaluated at run-time.
You can still do this however by using an entrypoint script, which will be evaluated at run-time after any environment variables have been set.
For example, you can define the following entrypoint.sh script:
#!/bin/bash
sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
exec "$#"
The exec "$#" will execute any CMD or command that is set.
Add it to the Dockerfile e.g:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Note that if you have an existing entrypoint, you will need to merge it with this one - you can only have one entrypoint.
Now you should find that the environment variable is respected, i.e.:
docker run -e CONFIG_VALUE=100 container_name cat CONFIG_FILE
Should work as expected.
That shouldn't be possible in a Dockerfile: those instructions are static, for making an image.
If you need runtime instruction when launching a container, you should code them in a script called by the CMD directive.
In other words, the sed would take place in a script that the CMD called. When doing the docker run, that script would have access to the environment variable set just before said docker run.
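A minimal sketch of that approach (start.sh is a hypothetical name; command_to_start_app stands in for your real start command):
#!/bin/sh
# runs at container start, so it sees whatever -e CONFIG_VALUE=... was passed
sed -i 's/CONFIG_VALUE/'"$CONFIG_VALUE"'/g' CONFIG_FILE
exec command_to_start_app
with, in the Dockerfile:
COPY start.sh /
RUN chmod +x /start.sh
CMD ["/start.sh"]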