How to switch user (su) in docker command - kubernetes

I want to launch a container as a non-root user, but I cannot modify the original Dockerfile. I know I could do something like RUN useradd xx followed by USER xx in the Dockerfile to achieve that.
What I am doing now is modifying the YAML file like the following:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-pod
    image: xxx
    command:
    - /bin/sh
    - -c
    - |
      useradd xx -s /bin/sh;
      su -l xx; # this line is not working
      sleep 1000000;
When I exec into the pod, the default user is still root. Can anyone help with that? Thanks in advance!

You need to use a security context, like below:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
Reference: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#podsecuritycontext-v1-core
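For context, here is a minimal sketch of where that fragment sits in a Pod spec (the uid/gid values are arbitrary examples, and the image must be able to run as a non-root uid):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: my-pod
    image: xxx
    command: ["/bin/sh", "-c", "sleep 1000000"]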
EDIT:
If you want to change the user in your container, you can add an extra Dockerfile layer on top of the original image; check below.
Add a Dockerfile layer:
FROM <your_image>
RUN adduser newuser
USER newuser
:
:
Now use this custom image in your Kubernetes manifests.
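For example (a sketch; the registry and tag are placeholders you would substitute):
docker build -t my-registry.example.com/my-image-nonroot:1.0 .
docker push my-registry.example.com/my-image-nonroot:1.0
and then point your Pod at it:
containers:
- name: my-pod
  image: my-registry.example.com/my-image-nonroot:1.0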

+1 to dahiya_boy's answer, however I'd like to add my 3 cents to what was already said.
I've reproduced your case using the popular nginx image. I also modified the commands from your example a bit so that a home directory is created for the user xxx, and added a few other commands for debugging purposes.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-pod
    image: nginx
    command:
    - /bin/sh
    - -c
    - |
      useradd xxx -ms /bin/bash;
      su xxx && echo $?;
      whoami;
      sleep 1000000;
After successfully applying the above yaml we can run:
$ kubectl logs my-pod
0
root
As you can see, the exit status of the echo $? command is 0, which means that the previous command in fact ran successfully. Even more: the construction with && implies that the second command runs if and only if the first command completed successfully (with exit status equal to 0). If su xxx didn't work, echo $? would never run.
Nonetheless, the very next command, which happens to be whoami, prints the actual user that is meant to run all commands in the container and which was defined in the original image. So no matter how many times you run su xxx, all subsequent commands will be run as user root (or another user, if one was defined in the Dockerfile of the image). So basically the only way to override it on the kubernetes level is using the already mentioned securityContext:
You need to use a security context, like below:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
However I understand that you cannot use this method if you have not previously defined your user in a custom image. This can be done if a user with such a uid already exists.
So to the best of my knowledge, it's impossible to do this the way you presented in your question and it's not an issue or a bug. It simply works this way.
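You can observe the same behaviour with plain docker, outside of kubernetes entirely (a quick sketch using the same nginx image):
$ docker run --rm nginx sh -c 'useradd demo; su demo -c whoami; whoami'
demo
root
su runs its child process as the new user, but once that child exits, control returns to the original root shell.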
If you kubectl exec into your newly created Pod in interactive mode, you can see that everything works perfectly: the user was successfully added and you can switch to this user without any problem:
$ kubectl exec -ti my-pod -- /bin/sh
# tail -2 /etc/passwd
nginx:x:101:101:nginx user,,,:/nonexistent:/bin/false
xxx:x:1000:1000::/home/xxx:/bin/bash
# su xxx
xxx@my-pod:/$ pwd
/
xxx@my-pod:/$ cd
xxx@my-pod:~$ pwd
/home/xxx
xxx@my-pod:~$ whoami
xxx
xxx@my-pod:~$
But it doesn't mean that by running su xxx as one of the commands provided in a Pod yaml definition, you will permanently change the default user.
I'd like to emphasize it again: in your example su -l xxx runs successfully. It's not true that it doesn't work. In other words:
1. The container is started as user root.
2. User root runs su -l xxx, which completes successfully and then exits.
3. User root runs whoami.
So the only reasonable solution is, as already mentioned by @dahiya_boy, adding an extra layer and creating a custom image.
As to:
@Rosmee You can add a new docker image layer and use that image in your kubernetes. – dahiya_boy 18 hours ago
yes I know that, but as I said above, I cannot modify the original image, I need to switch user dynamically – Rosmee 17 hours ago
You say "I cannot modify the original image" and this is exactly what custom image is about. No one is talking here about modifying the original image. It remains untouched. By writing your own Dockerfile and e.g. by adding in it an extra user and setting it as a default one, you don't modify the original image at all, but build a new custom image on top of it. That's how it works and that's the way it is meant to be used.

Related

First argument is not passed to image in kubernetes deployments

I have a docker image with the below entrypoint:
ENTRYPOINT ["sh", "-c", "python3 -m myapp ${*}"]
I tried to pass arguments to this image in my kubernetes deployments so that ${*} is replaced with them, but after checking the logs it seems that the first argument was ignored.
I tried to reproduce the result regardless of the image, and applied the below pod:
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
  - name: test
    image: postgres # or any image you may like
    command: ["bash -c /bin/echo ${*}"]
    args:
    - sth
    - serve
    - arg
When I check the logs, I just see serve arg, and sth is completely ignored.
Any idea what went wrong, or what I should do to pass arguments to exec-style entrypoints instead?
First, your command has quoting problems -- you are effectively running bash -c echo.
Second, you need to closely read the documentation for the -c option (emphasis mine):
If the -c option is present, then commands are read from
the first non-option argument command_string. If there
are arguments after the command_string, the first argument
is assigned to $0 and any remaining arguments are assigned
to the positional parameters. The assignment to $0 sets
the name of the shell, which is used in warning and error
messages.
So you want:
command: ["bash", "-c", "echo ${*}", "bash"]
Given your pod definition, this would set $0 to bash, and then $1 to sth, $2 to serve, and $3 to arg.
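You can verify this locally, without kubernetes at all; this sketch reproduces exactly the same argument handling:
$ bash -c 'echo $0; echo ${*}' bash sth serve arg
bash
sth serve arg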
There are some subtleties around using sh -c here. For the examples you show, it's not necessary. The important things to remember are that the ENTRYPOINT and CMD are combined together into a single command (or, in Kubernetes, command: and args:), and that sh -c generally takes only a single string argument and acts on it.
The examples you show don't use any shell functionality, so you can break the commands into their constituent words as YAML list items:
command:
- /bin/echo
- sth
- serve
- arg
For the Dockerfile case, there is a pattern of using ENTRYPOINT to specify a command and CMD for its arguments, which parallels Kubernetes's syntax here. For this to work well, I'd avoid sh -c (including the implicit sh -c from the ENTRYPOINT shell form); just provide the first set of words in JSON-array form.
ENTRYPOINT ["python", "-m", "myapp"]
# don't override command:, the image's ENTRYPOINT is right, but do add
args:
- foo
- bar
- baz
(If your entrypoint setup is complex enough to require shell operators, it's typically easier to write and debug to move it into a dedicated script and make that script be the ENTRYPOINT or CMD, rather than trying to figure out sh -c semantics and YAML quoting.)
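For illustration, a sketch of that wrapper-script pattern (entrypoint.sh is a name invented here for the example):
#!/bin/sh
# entrypoint.sh: do any shell-level setup here, then hand off to the real command
set -e
exec python -m myapp "$@"
and in the Dockerfile:
COPY entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
Because the script ends with an exec that forwards "$@", any Kubernetes args: still reach the real program.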

kubernetes execute a command in a env. varible with eval

I would like to execute a command in a container (let it be ls) and then read the exit code with echo $?.
kubectl exec -ti mypod -- bash -c "ls; echo $?" does not work because it returns the exit code of my current shell, not the one of the container.
So I tried to use eval on an env variable I defined in my manifest:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: container2
    image: varunuppal/nonrootsudo
    env:
    - name: resultCmd
      value: 'echo $?'
then kubectl exec -ti mypod -- bash -c "ls;eval $resultCmd" but the eval command does not return anything.
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
Note that I can run these two commands within the container:
kubectl exec -ti mypod bash
# ls;eval $resultCmd
bin dev home lib64 mnt proc run srv tmp var
boot etc lib media opt root sbin sys usr
0
How can I make it work?
Thanks in advance,
This is happening because you use double quotes instead of single ones.
Single quotes won't substitute anything, but double quotes will.
From the bash documentation:
3.1.2.2 Single Quotes
Enclosing characters in single quotes (') preserves the literal value of each character within the quotes. A single quote may not occur between single quotes, even when preceded by a backslash.
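A quick local demonstration of the difference (the false just sets a non-zero exit status in the local shell):
$ false
$ echo "ls; echo $?"   # double quotes: the local shell expands $? before anything else sees it
ls; echo 1
$ echo 'ls; echo $?'   # single quotes: the string is passed through literally
ls; echo $?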
To summarize, this is how your command should look:
kubectl exec -ti firstpod -- bash -c 'ls; echo $?'
Using the POSIX shell eval command is wrong 99.999% of the time, even if you ignore the presence of Kubernetes in this question. The problem in your question is that your kubectl command expands $resultCmd in the shell where you ran the kubectl command, specifically due to your use of double quotes. That interactive shell has no knowledge of the definition of $resultCmd in your "manifest" file, so it replaces $resultCmd with nothing.
Thanks Kurtis Rader and Thomas for your answers.
It also works when I precede the $? with a backslash:
kubectl exec -ti firstpod -- bash -c "ls; echo \$?"

Is it possible to install curl into busybox in kubernetes pod

I am using busybox to debug network problems in my kubernetes v1.18 pods. I created the busybox pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
and log in to check the kubernetes cluster's network situation:
kubectl exec -it busybox /bin/bash
What surprises me is that busybox does not contain curl. Why does the busybox image not include the curl command? I searched the internet and found that the docs do not talk about how to add curl to busybox. I tried to install curl but found no way to do it. Is there any way to add the curl package to busybox?
The short answer is: you cannot.
Why? Because busybox does not have a package manager like yum, apk, or apt-get.
Actually you have two solutions:
1. Either use a modified busybox
You can use other busybox images, like progrium/busybox, which provides opkg-install as a package manager.
image: progrium/busybox
Then:
kubectl exec -it busybox -- opkg-install curl
2. Or, if your concern is to use a minimal image, use alpine
image: alpine:3.12
Then:
kubectl exec -it alpine -- apk --update add curl
No. Consider alpine as a base image instead, which includes BusyBox plus a package manager, or build (or find) a custom image that has the tools you need pre-installed.
BusyBox is built as a single binary that contains implementations of many common Linux tools. The BusyBox documentation includes a listing of the included commands. You cannot "install" more commands into it without writing C code.
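You can list exactly which applets a given BusyBox build contains (a quick check against the stock image; the applet set varies between builds):
$ docker run --rm busybox busybox --list
wget shows up in that list, but curl does not.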
BusyBox does contain an implementation of wget, which might work for your purposes (wget -O- http://other-service).
BusyBox has only a subset of wget. The wget in your OS is significantly more capable than the one that comes with BusyBox.
To clarify what I mean, run the following in your OS:
$ wget --help | wc -l
207
while running wget's help inside a Busybox container shows only a minimal subset:
$ docker run --rm busybox wget --help 2>&1 | wc -l
20
In K8s, you could run the following:
$ kubectl run -i --tty --rm busybox --image=busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # wget
BusyBox v1.33.1 (2021-06-07 17:33:50 UTC) multi-call binary.
Usage: wget [-cqS] [--spider] [-O FILE] [-o LOGFILE] [--header 'HEADER: VALUE'] [-Y on/off]
[--no-check-certificate] [-P DIR] [-U AGENT] [-T SEC] URL...
Retrieve files via HTTP or FTP
--spider Only check URL existence: $? is 0 if exists
--no-check-certificate Don't validate the server's certificate
-c Continue retrieval of aborted transfer
-q Quiet
-P DIR Save to DIR (default .)
-S Show server response
-T SEC Network read timeout is SEC seconds
-O FILE Save to FILE ('-' for stdout)
-o LOGFILE Log messages to FILE
-U STR Use STR for User-Agent header
-Y on/off
If curl is required for your use case, I would suggest using Alpine, which is busybox + a minimal package manager and libc implementation, so that you can trivially run apk add --no-cache curl and get real curl (or even apk add --no-cache wget to get the "real" wget instead of BusyBox's wget).
As others said, the answer is no and you need to use another image.
There is:
- the official curl alpine-based image: https://hub.docker.com/r/curlimages/curl with curlimages/curl
- Busyboxplus images: https://hub.docker.com/r/radial/busyboxplus with radial/busyboxplus:curl
- Nixery with nixery.dev/curl
Image sizes:
$ docker images -f "reference=*/*curl"
REPOSITORY           TAG      IMAGE ID       CREATED       SIZE
curlimages/curl      latest   ab35d809acc4   9 days ago    11MB
radial/busyboxplus   curl     71fa7369f437   8 years ago   4.23MB
nixery.dev/curl      latest   aa552b5bd167   N/A           56MB
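Any of these can be used for a one-off check as an ephemeral pod (a sketch; the service URL is a placeholder):
$ kubectl run curl --rm -it --restart=Never --image=curlimages/curl --command -- \
    curl -sS http://my-service.default.svc.cluster.local
Thanks to --rm and --restart=Never, the pod is cleaned up again as soon as curl exits.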
As @abdennour is suggesting, I'm no longer sticking with busybox. Alpine is a very lightweight Linux container image, as others suggest here, in which you can install literally any UNIX-like tool handy for accomplishing your troubleshooting task. In fact, I use this function within my dotfiles at .bashrc to spin up a handy ephemeral ready-to-rock Alpine pod:
## This function takes an optional argument to run a pod within a Kubernetes NS; if it's not provided, it falls back to the `default` NS.
function kalpinepod () { kubectl run -it --rm --restart=Never --image=alpine handytools -n ${1:-default} -- /bin/ash; }
❯ kalpinepod kube-system
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
search kube-system.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.245.0.10
options ndots:5
/ # apk --update add curl openssl
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.13/community/x86_64/APKINDEX.tar.gz
(1/6) Installing ca-certificates (20191127-r5)
(2/6) Installing brotli-libs (1.0.9-r3)
(3/6) Installing nghttp2-libs (1.42.0-r1)
(4/6) Installing libcurl (7.74.0-r1)
(5/6) Installing curl (7.74.0-r1)
(6/6) Installing openssl (1.1.1j-r0)
Executing busybox-1.32.1-r3.trigger
Executing ca-certificates-20191127-r5.trigger
OK: 9 MiB in 20 packages
Or just copy a statically built curl into Busybox:
https://github.com/moparisthebest/static-curl/releases
Radial has an overlay of busybox images adding cURL: docker pull radial/busyboxplus:curl
They also have a second image with cURL + Git: docker pull radial/busyboxplus:git
Install the curl binary from the source website
Replace <binary-url> with the URL of the binary file found on curl.se:
export BINARY_URL="<binary-url>"
wget $BINARY_URL -O curl && install curl /bin; rm -f curl
This worked with the busybox:latest image.

Send arguments to a Job

I have a docker image that basically runs a one-time script. That script takes 3 arguments. My Dockerfile is:
FROM <some image>
ARG URL
ARG USER
ARG PASSWORD
RUN apt update && apt install curl -y
COPY register.sh .
RUN chmod u+x register.sh
CMD ["sh", "-c", "./register.sh $URL $USER $PASSWORD"]
When I spin up the container using docker run -e URL=someUrl -e USER=someUser -e PASSWORD=somePassword -itd <IMAGE_ID> it works perfectly fine.
Now I want to deploy this as a job.
My basic Job looks like:
apiVersion: batch/v1
kind: Job
metadata:
  name: register
spec:
  template:
    spec:
      containers:
      - name: register
        image: registeration:1.0
        args: ["someUrl", "someUser", "somePassword"]
      restartPolicy: Never
  backoffLimit: 4
But the pod errors out with:
Error: failed to start container "register": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "exec: \"someUrl\": executable file not found in $PATH"
Looks like it is taking my args as commands and trying to execute them. Is that correct? What can I do to fix this?
In the Dockerfile as you've written it, two things happen:
1. The URL, username, and password are fixed in the image. Anyone who can get the image can run docker history and see them in plain text.
2. The container startup doesn't take any arguments; it just runs the single command with its fixed set of arguments.
Especially since you're planning to pass these arguments in at execution time, I wouldn't bother trying to include them in the image. I'd reduce the Dockerfile to:
FROM ubuntu:18.04
RUN apt update \
&& DEBIAN_FRONTEND=noninteractive \
apt install --assume-yes --no-install-recommends \
curl
COPY register.sh /usr/bin
RUN chmod u+x /usr/bin/register.sh
ENTRYPOINT ["register.sh"]
When you launch it, the Kubernetes args: get passed as command-line parameters to the entrypoint. (It is the same thing as the Docker Compose command: and the free-form command at the end of a plain docker run command.) Making the script be the container entrypoint will make your Kubernetes YAML work the way you expect.
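With that change, the Job from the question should work as written; a sketch (assuming the image is rebuilt with the new ENTRYPOINT and retagged, e.g. as registeration:2.0, a hypothetical tag):
apiVersion: batch/v1
kind: Job
metadata:
  name: register
spec:
  template:
    spec:
      containers:
      - name: register
        image: registeration:2.0 # hypothetical retag of the rebuilt image
        args: ["someUrl", "someUser", "somePassword"]
      restartPolicy: Never
  backoffLimit: 4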
In general I prefer using CMD to ENTRYPOINT. (Among other things, it makes it easier to docker run --rm -it ... /bin/sh to debug your image build.) If you do that, then the Kubernetes args: need to include the name of the script it's running:
args: ["./register.sh", "someUrl", "someUser", "somePassword"]
Use:
args: ["sh", "-c", "./register.sh someUrl someUser somePassword"]

Ansible: have sudo but no root

I’d like to use Ansible to manage the configuration of our Hadoop cluster (running Red Hat).
I have sudo access and can manually ssh into the nodes to execute commands. However, I’m experiencing problems when I try to run Ansible modules to perform the same tasks. Although I have sudo access, I can’t become root. When I try to execute Ansible scripts that require elevated privileges, I get an error like this:
Sorry, user awoolford is not allowed to execute '/bin/bash -c echo
BECOME-SUCCESS- […] /usr/bin/python
/tmp/ansible-tmp-1446662360.01-231435525506280/copy' as awoolford on
[some_hadoop_node].
Looking through the documentation, I thought that the become_allow_same_user property might resolve this, and so I added the following to ansible.cfg:
[privilege_escalation]
become_allow_same_user=yes
Unfortunately, it didn't work.
This post suggests that I need permissions to sudo /bin/sh (or some other shell). Unfortunately, that's not possible for security reasons. Here's a snippet from /etc/sudoers:
root ALL=(ALL) ALL
awoolford ALL=(ALL) ALL, !SU, !SHELLS, !RESTRICT
Can Ansible work in an environment like this? If so, what am I doing wrong?
Well, you simply cannot execute /bin/sh or /bin/bash, as your /etc/sudoers shows. What you could do is change Ansible's default shell to something else (the executable variable in ansible.cfg).
Since your sudo policy allows everything by default (which does not seem really secure to me), and I suppose Ansible expects an sh-compatible shell, as a really dirty hack you could copy /bin/bash to some other path/name and set the executable variable accordingly (not tested).
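For reference, that setting would look like this in ansible.cfg (the shell path is a placeholder for whatever your sudoers policy actually permits):
[defaults]
# placeholder path; point at a shell copy your sudoers rules allow
executable = /usr/local/bin/mysh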
In the playbook file (e.g. runthisplaybook.yml), set:
---
- hosts: label_which_will_work_on_some_servers
  sudo: yes
  roles:
    - some_role_i_want_to_run
Next, in the role's tasks/main.yml, for the action which you have to run as sudo, use something like become_user (where common_user is a variable defined in the role's defaults/main.yml file as common_user: "this_user_can_sudo"):
- name: Run chkconfig on init script
  command: "sudo -u root /sbin/chkconfig --add tomcat"

# Set execute permission on run_jmeter_test.sh
- name: Set execute permission on run_jmeter_test.sh
  command: "chmod -R 755 {{ jmeter_perf_tests_results }}"
  become_user: "{{ common_user }}"

# OR Set execute permission on run_jmeter_test.sh
- name: Set execute permission on run_jmeter_test.sh
  command: "sudo -u firstuser sudo -u seconduser chmod -R 755 {{ jmeter_perf_tests_results }}"
  become_user: "{{ common_user }}"

# OR Set execute permission on run_jmeter_test.sh
- name: Set execute permission on run_jmeter_test.sh
  command: "chmod -R 755 {{ jmeter_perf_tests_results }}"
  become_user: "{{ common_user }}"
PS: While running ansible-playbook,
ansible-playbook runthisplaybook.yml --sudo-user=this_user_can_sudo -i hosts.yml -u user_which_will_connect_from_source_machine --private-key ${DEPLOYER_KEY_FILE} --extra-vars "target_svr_type=${server_type} deploy_environment=${DEPLOY_ENVIRONMENT} ansible_user=${ANSIBLE_USER}"
After researching the subject: as of Ansible 2.8, there doesn't seem to be a way to run commands as a different user using become without root permissions.
There's another way to achieve what you were asking without being so, how to put it, 'hacky'.
You can use the shell module with sudo su - <user> -c "COMMAND" to execute a command as a different user, without the need for root access to the original user.
For example:
---
- hosts: target_host

  tasks:
    - shell: 'sudo su EXEC_USER -c "whoami"'
      register: x

    - debug:
        msg: "{{ x.stdout_lines }}" # This returns EXEC_USER
However, if your play is complex, you would need to break it down and wrap only the commands that are required to be executed as a different user.
This isn't best practice (using sudo + shell instead of become), however it is a solution, and in my opinion a better one than creating a dummy shell on every node you manage.
I think sudo: yes is now deprecated and replaced with become: yes:
---
- hosts: servers_on_which_you_want_to_run
  become: yes
  roles:
    - some_role
The simplest solution is to just create an ansible.cfg in your playbook directory with the following content, if it doesn't accept the root user:
[defaults]
sudo_user = UsernameToWhichYouWantToUse
Hope this will solve your problem.