Jenkins Kubernetes build user permission problem

I am trying to do some SSH + git submodule builds, and initially I was getting this error when trying to pull code:
Submodule 'xxx' (git@host:thing/subdir/repo.git) registered for path 'repo'
Submodule 'yyy' (git@host:thing/subdir/yyy.git) registered for path 'yyy'
Cloning into '/home/jenkins/agent/workspace/thing/AAA/releaser/xxx'...
No user exists for uid 1000
fatal: Could not read from remote repository.
Which led me to this solution:
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    drop:
      - all
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 65534  # the conventional "nobody" UID
Now, this worked, but I was seeing some non-critical errors about not being able to save the SSH host key to /.ssh. It still checked out the code, so I didn't care, since every run is like a first run anyway.
However, I am now running into a different problem that really requires me to solve this user issue permanently:
+ mc alias set jenkins-user ****
mc: <ERROR> Unable to save new mc config. mkdir /.mc: permission denied.
I need to run the MinIO mc client, and when I give it credentials it tries to save them to /.mc; it can't, so the alias command fails, and with it the whole build.
So I need to figure out a better way to solve the initial user problem, or to tell mc to use a non-privileged directory like /tmp when running in this pod.
I am open to a solution to either issue if someone can assist.
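For the mc side alone, a possible workaround sketch: mc defaults its config directory to $HOME/.mc (hence /.mc when HOME is unset for an unknown UID) and also accepts an explicit --config-dir, so either can be pointed at /tmp. The endpoint and credential variables below are hypothetical placeholders:
export HOME=/tmp   # mc falls back to $HOME/.mc
mc --config-dir /tmp/.mc alias set jenkins-user "$MINIO_URL" "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"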

Ok, I ended up solving it with a mix of Kubernetes changes and Dockerfile changes.
First, I had to add a jenkins user with the right UID to a custom Docker image that contained what I needed:
# Start from the MinIO client image and add the tools the build needs
FROM minio/mc
RUN \
    microdnf update --nodocs && \
    microdnf install git zip findutils --nodocs && \
    microdnf clean all
# Create a jenkins user whose UID matches the one the Jenkins agent runs as
RUN adduser --home-dir /home/jenkins/ --shell /bin/sh --uid 1000 jenkins
USER jenkins
WORKDIR /home/jenkins
ENTRYPOINT ["sh"]
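Building and pushing the image is the usual routine (the tag matches the image referenced in the pod spec below):
docker build -t docker.io/chb0docker/releaser:latest .
docker push docker.io/chb0docker/releaser:latest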
Then I had to update the pod definition to run as the same UID it was complaining about before:
apiVersion: v1
kind: Pod
metadata:
  annotations: {}
spec:
  securityContext:
    runAsUser: 1000  # matches the jenkins user baked into the image
  containers:
    - name: releaser
      image: docker.io/chb0docker/releaser:latest
      imagePullPolicy: Always
      securityContext:
        # allowPrivilegeEscalation is a container-level field, not a pod-level one
        allowPrivilegeEscalation: false
      command:
        - cat
      tty: true
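A quick sanity check once a pod from this spec is running (the pod name here is hypothetical; the container name comes from the spec above):
kubectl exec -it mypod -c releaser -- id
# expected: uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)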

Related

How can I give my non-root user write permissions on a Kubernetes volume mount?

From my understanding (based on this guide https://snyk.io/blog/10-kubernetes-security-context-settings-you-should-understand/), if I have the following security context specified for some Kubernetes pod:
securityContext:
  # Enforce to be run as non-root user
  runAsNonRoot: true
  # Random values should be fine
  runAsUser: 1001
  runAsGroup: 1001
  # Automatically convert mounts to user group
  fsGroup: 1001
  # For whatever reasons this is not working
  fsGroupChangePolicy: "Always"
I expect this pod to be run as user 1001 with the group 1001. This is working as expected, because running id in the container results in: uid=1001 gid=1001 groups=1001.
The file system of all mounts should automatically be accessible by user group 1001, because we specified fsGroup and fsGroupChangePolicy. I guess that this also works because when running ls -l in one of the mounted folders, I can see that the access rights for the files look like this: -rw-r--r-- 1 50004 50004. Ownership is still with the uid and gid it was initialised with but I can see that the file is now readable for others.
The question now is: how can I add write permissions for my fsGroup? Those seem to still be missing.
You need to add an initContainer in your pod/deployment/{supported kinds} with commands that change the permissions on the volume mounted into the pod to the UID/GID the main container runs as.
initContainers:
  - name: volumepermissions
    image: busybox # any image having the linux utilities mkdir, echo, chown will work
    imagePullPolicy: IfNotPresent
    env:
      - name: "VOLUME_DATA_DIR"
        value: mountpath_for_the_volume
    command:
      - sh
      - -c
      - |
        mkdir -p $VOLUME_DATA_DIR
        chown -R 1001:1001 $VOLUME_DATA_DIR
        echo 'Volume permissions OK ✓'
    volumeMounts:
      - name: data
        mountPath: mountpath_for_the_volume
This is necessary when a container in a pod is running as a user other than root and needs write permissions on a mounted volume.
If this is a Helm chart, this init container can be written as a named template and reused in all the pods/deployments/{supported kinds} that need to change volume permissions, as sketched below.
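A hedged sketch of such a named template (the chart name mychart and values key volumeDataDir are hypothetical):
{{- define "mychart.volumepermissions" -}}
- name: volumepermissions
  image: busybox
  imagePullPolicy: IfNotPresent
  command: ["sh", "-c", "mkdir -p {{ .Values.volumeDataDir }} && chown -R 1001:1001 {{ .Values.volumeDataDir }}"]
  volumeMounts:
    - name: data
      mountPath: {{ .Values.volumeDataDir }}
{{- end }}
It can then be pulled into any pod template with:
initContainers:
  {{- include "mychart.volumepermissions" . | nindent 2 }}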
Update: as mentioned by @The Fool, it should work with the setup as posted if you are using Kubernetes v1.23 or greater: as of v1.23, the securityContext.fsGroup and securityContext.fsGroupChangePolicy features are GA/stable.
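A quick way to verify, assuming a pod named app with the volume mounted at /data (both names hypothetical):
kubectl exec app -- sh -c 'id && touch /data/writetest && ls -ln /data/writetest'
# on v1.23+ the file should be created and group-owned by 1001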

What capabilities are required to restart a process of another container (without being root)?

What additional capabilities does a sidecar container need so that it can restart a process in another container without being root?
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  shareProcessNamespace: true
  containers:
    - name: nginx
      image: nginx
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
    - name: shell
      image: busybox:1.28
      securityContext:
        capabilities:
          add:
            - SYS_PTRACE
        runAsUser: 12345
        runAsGroup: 12345
        privileged: false
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
      stdin: true
      tty: true
If I run the shell container as root, I can restart the nginx process from it with no problem. However, if I run the container as non-root, as in the pod.yaml above, I get an error saying I'm not allowed to signal the process.
kubectl apply -f pod.yaml
kubectl attach -it nginx -c shell
# in the shell container
/ $ ps
PID USER TIME COMMAND
1 65535 0:00 /pause
7 root 0:00 nginx: master process nginx -g daemon off;
/ $ kill -HUP 7
sh: can't kill pid 7: Operation not permitted
The two containers just provide a simple example of the problem. My plan is to restart the process in the other container from a sidecar (here, the shell container) whenever a configuration value (from a ConfigMap) changes that the process only reads at startup.
You can get a list of the current capabilities from your container, copy them, and apply them in your YAML file. This document describes how to use the following command to get that list:
$ sudo docker run --rm -it --cap-drop $CAP alpine sh
Also, I noticed you need to add the following line below the securityContext:
runAsNonRoot: true
In addition, please note that when you are working with a non-root user in nginx you will need to follow these recommendations, and finally you can find the full list of capabilities in this link.
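For the specific failure above, a hedged sketch: sending a signal from UID 12345 to a root-owned process fails the kernel's permission check, and CAP_KILL is the capability that bypasses that check, so adding it alongside SYS_PTRACE may be what's missing. Note that, depending on the runtime, capabilities added to a non-root container may not land in the process's effective set without file or ambient capabilities:
securityContext:
  capabilities:
    add:
      - SYS_PTRACE
      - KILL  # CAP_KILL: bypass the UID-match check when sending signals
  runAsUser: 12345
  runAsGroup: 12345
  allowPrivilegeEscalation: false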

What is "/usr/bin/nsenter -m/proc/1/ns/mnt" in a Kubernetes DaemonSet?

I have read some tutorials on how to mount a volume in a container and run a script on the host/node directly. This is the example given.
DaemonSet pod spec:
hostPID: true
nodeSelector:
  cloud.google.com/gke-local-ssd: "true"
volumes:
  - name: setup-script
    configMap:
      name: local-ssds-setup
  - name: host-mount
    hostPath:
      path: /tmp/setup
initContainers:
  - name: local-ssds-init
    image: marketplace.gcr.io/google/ubuntu1804
    securityContext:
      privileged: true
    volumeMounts:
      - name: setup-script
        mountPath: /tmp
      - name: host-mount
        mountPath: /host
    command:
      - /bin/bash
      - -c
      - |
        set -e
        set -x
        # Copy setup script to the host
        cp /tmp/setup.sh /host
        # Copy wait script to the host
        cp /tmp/wait.sh /host
        # Give execute priv to the wait script
        /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/wait.sh
        # Give execute priv to the setup script
        /usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/setup.sh
        # Wait for Node updates to complete
        /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/wait.sh
        # The /tmp folder is mounted on the host, so the host can run the script
        /usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/setup.sh
containers:
  - image: "gcr.io/google-containers/pause:2.0"
    name: pause
(There is a ConfigMap providing the .sh files; I skip that here.)
What does "/usr/bin/nsenter -m/proc/1/ns/mnt" mean? Is this a command to run something on the host? What is "/proc/1/ns/mnt"?
Let's start from namespaces to understand this in detail.
Namespaces in containers help isolate resources among processes. A namespace controls which kernel resources a process sees and can use, which is what provides the isolation between the different containers running on a system.
That said, these access restrictions also make things complicated, and that is where the nsenter command comes in: it runs a program inside the namespaces of another process, somewhat like sudo runs a program with another user's privileges. It can give access to the mount, UTS, IPC, network, PID, user, cgroup, and time namespaces.
The -m in your example is --mount, which enters the mount namespace specified by that file. /proc/1/ns/mnt is the mount namespace of PID 1, and because the pod sets hostPID: true, PID 1 is the host's init process; so everything after nsenter runs against the host's filesystem rather than the container's.
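A hedged illustration, run from inside a container that has hostPID: true and is privileged (both are required for this to work):
ls /                                         # the container's own root filesystem
/usr/bin/nsenter -m/proc/1/ns/mnt -- ls /    # the host's root filesystem (PID 1's mount namespace)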

Volume Write Permissions

How can I give the non-root user full access to the mounted volume path in Kubernetes (pod)?
I'm using a volume on the host (/workspace/projects path) and writing to the directory as below.
volumeMounts:
  - name: workspace
    mountPath: /workspace/projects
Since I'm copying the git repository content into the projects directory, git sets the permissions to 755 by default. I want to set them to 775, as I am unable to write to the directory.
Could you please let me know the best way to do this? I saw initContainers and am not sure whether there is a better solution. Appreciate any help! Thanks in advance!
This happens when you have to run a process inside a container as a non-root user and you mount a volume into the pod, but the volume has root:root ownership.
To give access to a specific user, an initContainer is one way, like the following:
initContainers:
  - name: volume-mount-permission
    image: busybox
    command: ["sh", "-c", "chmod 775 /workspace/projects && chown -R <user> /workspace/projects"]
    volumeMounts:
      - name: workspace
        mountPath: /workspace/projects
You can also use a security context: create a user and group, add the user to the group in the Dockerfile, and set the following in the spec:
spec:
  securityContext:
    runAsUser: <UID>
    runAsGroup: <GID>
    fsGroup: <GID>
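A minimal Dockerfile sketch of that setup, with hypothetical names and IDs (they must match the runAsUser/runAsGroup/fsGroup values above):
FROM alpine:3.18
# Hypothetical: create a group and user whose IDs match the securityContext
RUN addgroup -g 1001 app && adduser -D -u 1001 -G app app
USER app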

Kubernetes cannot mount a volume to a folder

I am following these docs on how to set up a sidecar proxy for my Cloud SQL database. They refer to a manifest on GitHub that, as I find it all over the place in GitHub repos etc., seems to work for 'everyone', but I run into trouble. The proxy container cannot mount /secrets/cloudsql, it seems, as it cannot successfully start. When I run kubectl logs [mypod] cloudsql-proxy:
invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
So the secret seems to be the problem.
Relevant part of the manifest:
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=pqbq-224713:europe-west4:osm=tcp:5432",
"-credential_file=/secrets/cloudsql/mysecret.json"]
securityContext:
runAsUser: 2
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: cloudsql-instance-credential
secret:
secretName: mysecret
To test/debug the secret, I mounted the volume into another container that does start, but there the path and file /secrets/cloudsql/mysecret.json do not exist either. However, when I mount the secret into an already EXISTING folder, I find in that folder not the mysecret.json file (as I expected...) but, in my case, the two secrets it contains: /existingfolder/password and /existingfolder/username (apparently this is how it works: each key in the secret becomes a file). When I cat these secrets they give the proper strings, so they seem fine.
So it looks like the path cannot be created by the system; is this a permission issue? I tried simply mounting in the proxy container at the root ('/'), so no folder, but that gives an error saying it is not allowed. As the image gcr.io/cloudsql-docker/gce-proxy:1.11 is from Google and I cannot get it running, I cannot see what folders it has.
My questions:
Is the mountPath created from the manifest, or should it already exist in the container?
How can I get this working?
I solved it. I was using the same secret on the cloudsql-proxy as the one used by the app (env), but it needs to be a key you generate from a service account, turned into a secret of its own. Then it works. This tutorial helped me through.
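For reference, turning a downloaded service-account key into such a secret typically looks like this (the local filename key.json is illustrative; the secret name and key match the manifest above):
kubectl create secret generic mysecret --from-file=mysecret.json=key.json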