Helm chart MongoDB cannot create directory: permission denied - kubernetes

I am trying to deploy MongoDB with Helm and it gives this error:
mkdir: cannot create directory /bitnami/mongodb/data: Permission denied.
I also tried this solution:
sudo chown -R 1001 /tmp/mongo
but it says there is no such directory.

You have permission denied on /bitnami/mongodb/data and you are trying to modify another path: /tmp/mongo. It is possible that you do not have such a directory at all.
You need to change the owner of the resource for which you don't have permissions, not random, unrelated paths :)
You've probably seen this github issue and this answer:
You are getting that error message because the container can't mount the /tmp/mongo directory you specified in the docker-compose.yml file.
As you can see in our changelog, the container was migrated to the non-root user approach, which means that the user 1001 needs read/write permissions on the /tmp/mongo folder so it can be mounted and used. Can you modify the permissions on your local folder and try to launch the container again?
sudo chown -R 1001 /tmp/mongo
This method will work if you are going to mount the /tmp/mongo folder, which is actually not a very common approach. Look at another answer:
Please note that mounting host path volumes is not the usual way to work with these containers. With docker-compose, you would use Docker volumes (which already handle the permission issue); the same applies to Kubernetes and the MongoDB Helm chart, which uses the securityContext section to ensure the proper permissions.
In your situation, you'll just have to change the owner of the path /bitnami/mongodb/data, or use a securityContext in your Helm chart, and everything should work out for you.
The most relevant part, with an example securityContext, is probably this:
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
  fsGroupChangePolicy: "OnRootMismatch"
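For the Bitnami MongoDB chart specifically, the same idea is usually expressed through the chart's values rather than a raw pod spec. A minimal sketch of a values override, assuming your chart version exposes podSecurityContext, containerSecurityContext and volumePermissions keys (check the chart's values.yaml before relying on them):
# values-override.yaml (sketch; exact key names depend on the chart version)
podSecurityContext:
  enabled: true
  fsGroup: 1001
containerSecurityContext:
  enabled: true
  runAsUser: 1001
# runs an init container that chowns the data volume to the non-root user
volumePermissions:
  enabled: true
You would then install with something like helm install my-mongo bitnami/mongodb -f values-override.yaml.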

Related

Adding SSL Certs via mounted volume causing permissions issue

In my docker-compose.yml, I'm using the following to mount SSL certs into my container:
- ./certs:/var/lib/postgresql/certs
The ./certs folder and everything within it is owned by root locally.
However, upon starting the container, I receive:
2022-08-26 20:04:40.623 UTC [1] FATAL: could not load server certificate file "/var/lib/postgresql/certs/db.crt": Permission denied
Updating the permissions locally to anything else (777, 755, etc.) results in a separate error:
FATAL: private key file "/var/lib/postgresql/certs/postgresdb.key" has group or world access
I realize I can copy the certs via my Dockerfile, but I'd rather not have to build a new image each time I want to change certificates. What is the best way to go about handling this?
Change the ownership of the certs to the user that's used inside the container, before you start the container.
You need to double-check the ID of the user, since you didn't show which image you run. Below is an example.
sudo chmod -R 400 ./certs
sudo chown -R 5432:5432 ./certs
Alternatively, you can run the container with your local user ID. I only recommend this for development purposes.
docker run --user "$(id -u)" postgres
In that case, also make sure your local user has permissions on the certs dir.
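If you are using docker-compose rather than plain docker run, the same development-only trick can go in the compose file. A minimal sketch, assuming UID and GID are provided as environment variables (for example via a .env file next to docker-compose.yml):
services:
  db:
    image: postgres
    # run as the local user that owns ./certs (development only)
    user: "${UID}:${GID}"
    volumes:
      - ./certs:/var/lib/postgresql/certs:ro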

bitnami/kubectl container unable to create files - permission denied

I have used the "bitnami/kubectl:latest" image for my test-c container, which runs inside a test pod. I just logged in to that container and wanted to create a file inside it, but ended up with the error below.
I have no name!@coredns-hook-c92g7:/$ touch test
touch: cannot touch 'test': Permission denied
I have no name!@coredns-hook-c92g7:/$ mkdir test
mkdir: cannot create directory 'test': Permission denied
Can someone help me understand why this problem occurs and how to fix it? I am aware that mounting the file as a ConfigMap might help me, but I just need to understand this issue. Thanks in advance!
The issue here is that the Docker image you are using is configured to run as a non-root user (USER 1001 in this case).
Have a look at the Dockerfile instruction:
https://github.com/bitnami/bitnami-docker-kubectl/blob/master/1.24/debian-11/Dockerfile#L24
So you can either
create files in a directory owned by the non-root user, such as /tmp (or a writable mounted volume; see the sketch below), or
create your own Docker image with that USER 1001 instruction removed from the Dockerfile and host it in your own repository, which can then be pulled into your cluster.
Whatever works for you.
Hope this helps!
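If you need a writable location without rebuilding the image, another option is to mount a scratch volume into the pod. A minimal sketch; the pod and volume names are made up, not from the question:
apiVersion: v1
kind: Pod
metadata:
  name: kubectl-test               # hypothetical name
spec:
  securityContext:
    fsGroup: 1001                  # group-own mounted volumes so UID 1001 can write
  containers:
    - name: test-c
      image: bitnami/kubectl:latest
      command: ["sleep", "infinity"]   # keep the container alive for an interactive shell
      volumeMounts:
        - name: scratch
          mountPath: /scratch      # create files here instead of /
  volumes:
    - name: scratch
      emptyDir: {}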

Bash script mounted as configmap with 777 permissions cannot be run

This might be simple, but I can't seem to figure out why a bash script mounted as a configmap cannot be run as root:
root@myPodId:/opt/nodejs-app# ls -alh /path/fileName
lrwxrwxrwx 1 root root 18 Sep 10 09:33 /path/fileName -> ..data/fileName
root@myPodId:/opt/nodejs-app# whoami
root
root@myPodId:/opt/nodejs-app# /bin/bash -c /path/fileName
/bin/bash: /path/fileName: Permission denied
I'm guessing, but I'd think that as with Docker, the root in the container isn't the actual root and works more like a pseudo-root account.
If that's the case, and the file cannot be run this way, how would you include the script without having to re-create the Docker container every time the script changes?
See here: https://github.com/kubernetes/kubernetes/issues/71356#issuecomment-441169334
You need to set the defaultMode on the ConfigMap to the permissions you are asking for:
volumes:
  - name: test-script
    configMap:
      name: test-script
      defaultMode: 0777
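For completeness, a minimal sketch of the container side that mounts the volume above and runs the script; the image, mount path and script name are placeholders, not taken from the question:
containers:
  - name: app
    image: node:18                 # placeholder image
    command: ["/bin/bash", "/scripts/fileName"]
    volumeMounts:
      - name: test-script
        mountPath: /scripts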
Alright, so I don't have links to the documentation, however ConfigMaps are definitely mounted on a read-only filesystem. What I came up with is to cat the content of the file into another file in a location where the local root can write (/usr/local in my case), and this way the file can be run.
If anyone comes up with a more clever solution I'll mark it as the correct answer.
It's no surprise that you cannot run a script which is mounted as a ConfigMap. The name of the resource itself (ConfigMap) should have hinted that it isn't meant to be used that way.
As a workaround you can put your script in some git repo, mount an emptyDir into an init container that clones the repo using git, and then mount the same emptyDir into the Pod's main container; the init container will download the latest version every time the Pod is created (see the sketch below).
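A minimal sketch of that workaround, with a hypothetical repository URL and script name:
apiVersion: v1
kind: Pod
metadata:
  name: script-runner              # hypothetical name
spec:
  initContainers:
    - name: fetch-scripts
      image: alpine/git            # small image whose entrypoint is git
      args: ["clone", "--depth", "1", "https://example.com/your/scripts.git", "/work"]
      volumeMounts:
        - name: scripts
          mountPath: /work
  containers:
    - name: app
      image: node:18               # placeholder image
      command: ["/bin/bash", "/work/fileName"]
      volumeMounts:
        - name: scripts
          mountPath: /work
  volumes:
    - name: scripts
      emptyDir: {}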

Error: Permission denied - /usr/share/logstash/sincedb/sincedb

I have a docker-compose file for running several containers, including Logstash. I have mapped the sincedb directory as a mount, as in the snippet:
logstash:
  build:
    context: logstash/
  volumes:
    - ./tmp/logstash/sincedb:/usr/share/logstash/sincedb
The Logstash container has some permission errors, in particular with accessing sincedb as shown in the error snippet below:
Error: Permission denied - /usr/share/logstash/sincedb/sincedb
Exception: Errno::EACCES
I tried to execute chmod within the container but I get the error below:
bash-4.2$ chmod o+wx /usr/share/logstash/sincedb/
chmod: changing permissions of ‘/usr/share/logstash/sincedb/’: Operation not permitted
Is there a way to overcome this permission issue?
I was able to resolve the issue by setting proper permissions on the host folder, which maps to the folder in the Docker container. By issuing the command chmod -R 757 against the folder, access was possible. However, this was a temporary measure; I later discovered the correct permissions can be set in the docker-compose.yml file by appending :rw at the end of the specific line, like so:
volumes:
  - logstash_data:/usr/share/logstash/sincedb:rw
This effectively maintains the permissions across rebuilds, which is a limitation of the earlier-mentioned method (in addition to its security implications).
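If you want to keep the host bind mount instead, another option is to hand the host directory over to the UID the Logstash container runs as. A minimal sketch, assuming the image runs as UID 1000 (verify with docker exec <container> id):
sudo mkdir -p ./tmp/logstash/sincedb
sudo chown -R 1000:1000 ./tmp/logstash/sincedb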
You could try to use Docker named volumes. Currently you are just mounting a host folder into the container.
Sample docker-compose.yml with named volumes
version: '3.5'
services:
  logstash:
    build:
      context: logstash/
    volumes:
      - logstash_data:/usr/share/logstash/sincedb
volumes:
  logstash_data: # optionally define more parameters
Then you can see the named volume with the command:
docker volume ls
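and, assuming the default local volume driver (Compose usually prefixes the volume name with the project name), inspect where its data lives on the host:
docker volume inspect <project>_logstash_data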

How can I move postgresql data to another directory on Ubuntu over Amazon EC2?

We've been running postgresql 8.4 for quite some time. As with any database, we are slowly reaching our threshold for space. I added another 8 GB EBS drive and mounted it to our instance and configured it to work properly on a directory called /files
Within /files, I manually created the new data directory (/files/postgresql/main, referenced below).
Correct me if I'm wrong, but I believe all postgresql data is stored in /var/lib/postgresql/8.4/main
I backed up the database and ran sudo /etc/init.d/postgresql stop, which stops the postgresql server. I tried to copy and paste the contents of /var/lib/postgresql/8.4/main into the /files directory, but that turned out to be a HUGE MESS due to file permissions. I had to go in and chmod the contents of that folder just so that I could copy and paste them. Some files did not copy fully because of root permissions. I then modified the data_directory parameter in postgresql.conf to point to the /files directory:
data_directory = '/files/postgresql/main'
and I ran sudo /etc/init.d/postgresql restart, and the server failed to start, again probably due to permission issues. Amazon EC2 only lets you log in as the ubuntu user by default; you can only become root from within the terminal, which makes everything a lot more complicated.
Is there a much cleaner and more efficient step by step way of doing this?
Stop the server.
Copy the datadir while retaining permissions - use cp -aRv.
Then (easiest, as it avoids the need to modify initscripts) just move the old datadir aside and symlink the old path to the new location.
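A minimal sketch of those steps, using the paths from the question:
# stop the server
sudo /etc/init.d/postgresql stop
# copy the data directory, preserving ownership and permissions
sudo mkdir -p /files/postgresql
sudo cp -aRv /var/lib/postgresql/8.4/main /files/postgresql/
# move the old datadir aside and symlink the old path to the new location
sudo mv /var/lib/postgresql/8.4/main /var/lib/postgresql/8.4/main.old
sudo ln -s /files/postgresql/main /var/lib/postgresql/8.4/main
# start the server again (data_directory in postgresql.conf can stay at its default)
sudo /etc/init.d/postgresql start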
Thanks for the accepted answer. Instead of the symlink you can also use a bind mount; that way it is independent of the file system. If you want to use a dedicated hard drive for the database, you can also mount it directly at the data directory.
I did the latter. Here are my steps if someone needs a reference. I ran this as a script on many AWS instances.
# stop postgres server
sudo service postgresql stop
# create new filesystem in empty hard drive
sudo mkfs.ext4 /dev/xvdb
# mount it
mkdir /tmp/pg
sudo mount /dev/xvdb /tmp/pg/
# copy the entire postgres home dir content
sudo cp -a /var/lib/postgresql/. /tmp/pg
# mount it to the correct directory
sudo umount /tmp/pg
sudo mount /dev/xvdb /var/lib/postgresql/
# see if it is mounted
mount | grep postgres
# add the mount point to fstab
echo "/dev/xvdb /var/lib/postgresql ext4 rw 0 0" | sudo tee -a /etc/fstab
# when database is in use, observe that the correct disk is being used
watch -d grep xvd /proc/diskstats
A clarification: it is the particular AMI that you used that sets ubuntu as the default user; this may not apply to other AMIs.
In essence, if you are trying to move data manually, you will probably need to do so as the root user, and then make sure it's available to whatever user postgres runs as.
You also have the option of snapshotting the volume and increasing the size of a volume created from the snapshot. Then you could replace the volume on your instance with the new volume (you will probably have to resize the partition to take advantage of all the space).
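A rough sketch of that snapshot-and-grow approach with the AWS CLI; the volume, snapshot and instance IDs and the device names below are placeholders, and an ext4 filesystem without a partition table is assumed:
# snapshot the existing data volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pg data"
# create a larger volume from that snapshot, in the instance's availability zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 16 --availability-zone us-east-1a
# stop postgres and unmount the old data filesystem
sudo /etc/init.d/postgresql stop
sudo umount /dev/xvdf
# swap the volumes
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf
# mount the new, larger volume, grow the filesystem to fill it, then restart
sudo mount /dev/xvdf /files
sudo resize2fs /dev/xvdf
sudo /etc/init.d/postgresql start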