Singularity exec - echo redirect issue

I am running a Singularity container with an Ubuntu Xenial base.
When I attempt to create a text file by redirecting the output of echo to a file, the target of the redirect is interpreted to be on the host instead of in the container.
Below is the command:
singularity exec ubuntu_xenial_image.img echo "test" >> /mnt/test.txt
Instead of creating test.txt in the container folder /mnt, it tries to write test.txt to /mnt on the host, resulting in a "no permission" error, as obviously I don't have permission to write to the host's /mnt folder.
Do you know why the redirect goes to the host file system rather than the container file system, given how the singularity exec command is supposed to work?

The host shell parses the redirection before singularity ever runs, so the command as written is split in two:
singularity exec ubuntu_xenial_image.img echo "test"
runs in the container, while its output is redirected by >> /mnt/test.txt into the host filesystem.
To correct it, wrap the whole pipeline in a shell invocation:
$ singularity exec ubuntu_xenial_image.img sh -c 'echo "test" >> /mnt/test.txt'
This way the complete command, including the redirect, is interpreted by sh inside the container.
In addition, verify that you have write permission on the container's /mnt directory, or execute with sudo.
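To confirm where the file ends up, you can run the write and the read-back in the same in-container shell (a minimal check, reusing the image name from the question; note that with a read-only image the write will still fail unless /mnt is writable or the container was built/started writable, e.g. with --writable):
$ singularity exec ubuntu_xenial_image.img sh -c 'echo "test" >> /mnt/test.txt && cat /mnt/test.txt'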

Related

Locust not generating failures.csv and exceptions.csv in distributed mode without UI in Docker

When testing an API using Locust in distributed mode without the UI in Docker, distribution.csv and requests.csv are generated, but failures.csv and exceptions.csv are not, even though requests.csv shows failures, as given below.
"Method","Name","# requests","# failures","Median response time","Average response time","Min response time","Max response time","Average Content Size","Requests/s"
"POST","/api/something/something",197009,56,470,559,78,156714,1,436.31
Can you please help?
The problem is that the files need to be written to a folder that Locust has permission to write to, and one that is mounted as a volume on your host. If you add a mounted folder before the file name, it should work. For example:
Dockerfile:
# Set base image
FROM locustio/locust
ADD locustfile.py locustfile.py
Docker build command:
docker build -t mykey/myimage:1.0 .
Docker run command (on Windows; the Linux equivalent with $PWD is shown after it):
docker run --volume "%CD%:/mnt/locust" -e LOCUSTFILE_PATH=/mnt/locust/locustfile.py -e TARGET_URL=https://example.com -e LOCUST_OPTS="--clients=10 --no-web --run-time=600 --csv=/mnt/locust/output" mykey/myimage:1.0
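The same command on Linux, with only %CD% swapped for $PWD:
docker run --volume "$PWD:/mnt/locust" -e LOCUSTFILE_PATH=/mnt/locust/locustfile.py -e TARGET_URL=https://example.com -e LOCUST_OPTS="--clients=10 --no-web --run-time=600 --csv=/mnt/locust/output" mykey/myimage:1.0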
The files will now write to the same folder where locustfile.py is located.

Replacing postgresql.conf in a docker container

I am pulling the postgres:12.0-alpine Docker image to build my database. My intention is to replace the postgresql.conf file in the container to reflect the changes I want (changing the data directory, modifying backup options, etc.). I am trying with the following Dockerfile:
FROM postgres:12.0-alpine
# create the custom user
RUN addgroup -S custom && adduser -S custom_admin -G custom
# create the appropriate directories
ENV APP_HOME=/home/data
ENV APP_SETTINGS=/var/lib/postgresql/data
WORKDIR $APP_HOME
# copy entrypoint.sh
COPY ./entrypoint.sh $APP_HOME/entrypoint.sh
# copy postgresql.conf
COPY ./postgresql.conf $APP_HOME/postgresql.conf
RUN chmod +x /home/data/entrypoint.sh
# chown all the files to the app user
RUN chown -R custom_admin:custom $APP_HOME
RUN chown -R custom_admin:custom $APP_SETTINGS
# change to the app user
USER custom_admin
# run entrypoint.sh
ENTRYPOINT ["/home/data/entrypoint.sh"]
CMD ["custom_admin"]
My entrypoint.sh looks like:
#!/bin/sh
rm /var/lib/postgresql/data/postgresql.conf
cp ./postgresql.conf /var/lib/postgresql/data/postgresql.conf
echo "replaced .conf file"
exec "$#"
But I get an exec error saying 'custom_admin: not found' on the 'exec "$@"' line. What am I missing here?
In order to provide a custom configuration, use the command below:
docker run -d --name some-postgres -v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf postgres -c 'config_file=/etc/postgresql/postgresql.conf'
Here my-postgres.conf is your custom configuration file.
Refer to the Docker Hub page for more information about the postgres image.
Better to use the answer suggested by @Thilak; you do not need a custom image just to use a custom config.
Now for the problem with CMD ["custom_admin"] in the Dockerfile: whatever command is passed to CMD is executed at the end of the entrypoint, and it normally refers to the main, long-running process of the container. custom_admin is a user, not a command; you need to replace it with the process that should run as the container's main process.
Change CMD to
CMD ["postgres"]
I would also suggest reusing the official entrypoint, which does many tasks out of the box; your own entrypoint just starts the container with no DB initialization, etc.
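If you do keep a custom image, a minimal sketch of a working entrypoint is shown below. It assumes the conventions of the official postgres image, whose docker-entrypoint.sh is on the PATH; the config is copied to a path outside the data directory, because initdb populates $PGDATA on first start and would clash with a pre-copied file:
#!/bin/sh
# place the custom config outside the data directory
cp /home/data/postgresql.conf /etc/postgresql/postgresql.conf
echo "replaced .conf file"
# defer to the official entrypoint so DB initialization still happens
exec docker-entrypoint.sh "$@"
with the Dockerfile's CMD changed to point postgres at that file:
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
You may also need a RUN mkdir -p /etc/postgresql plus a chown in the Dockerfile so the copy succeeds as the non-root user.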

Kubernetes kubectl copy command failing

I have a pod running a Python image as user 199. My code, app.py, is placed in the /tmp/ directory. Now when I run a copy command to replace the running app.py, the command simply fails with a 'file exists' error.
Try the --no-preserve=true flag with the kubectl cp command. It passes the --no-same-owner and --no-same-permissions flags to the tar utility while extracting the copied file in the container.
The GNU tar manual suggests adding the --skip-old-files or --overwrite flag to the tar --extract command to avoid the error message you encountered, but to my knowledge there is no way to pass these optional arguments through kubectl cp.
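For example (the pod name and paths are placeholders):
kubectl cp app.py mypod:/tmp/app.py --no-preserve=true
If you need tar's --overwrite behaviour explicitly, you can bypass kubectl cp and stream the file through kubectl exec yourself:
tar cf - app.py | kubectl exec -i mypod -- tar xf - -C /tmp --overwrite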

Copy folder with wildcard from docker container to host

While creating a backup script to dump MongoDB inside a container, I need to copy the dump folder out of the container, but docker cp doesn't seem to work with wildcards:
docker cp mongodb:mongo_dump_* .
The following is thrown in the terminal:
Error response from daemon: lstat /var/lib/docker/aufs/mnt/SomeHash/mongo_dump_*: no such file or directory
Is there any workaround to use wildcards with the cp command?
I had a similar problem, and had to solve it in two steps:
$ docker exec <id> bash -c "mkdir -p /extract; cp -f /path/to/fileset* /extract"
$ docker cp <id>:/extract/. .
It seems there is no way yet to use wildcards with the docker cp command; see https://github.com/docker/docker/issues/7710.
You can create the mongo dump files in a folder inside the container and then copy that folder, as detailed in the other answer here.
If you have a large dataset and/or need to do the operation often, the best way to handle it is to use Docker volumes, so you can access the files directly from your host folder without any extra command: https://docs.docker.com/engine/userguide/containers/dockervolumes/
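For example, a rough sketch of the volume approach (the image tag, database host name, and paths are illustrative, not taken from the question):
docker run --rm -v "$PWD/dump:/dump" mongo sh -c 'mongodump --host mydb --out /dump'
The dump files then appear directly in ./dump on the host, and no docker cp is needed.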
Today I faced the same problem, and solved it like this:
docker exec container /bin/sh -c 'tar -cf - /some/path/*' | tar -xvf -
Hope this helps.

Upstart / init script not working

I'm trying to create a service / script to automatically start and control my nodejs server, but it doesn't seem to work at all.
First of all, I used this source as my main reference: http://kvz.io/blog/2009/12/15/run-nodejs-as-a-service-on-ubuntu-karmic/
After testing around, I minimized the content of the actual file to avoid any kind of error, resulting in this (the bare minimum, but it doesn't work):
description "server"
author "blah"
start on started mountall
stop on shutdown
respawn
respawn limit 99 5
script
export HOME="/var/www"
exec nodejs /var/www/server/server.js >> /var/log/node.log 2>&1
end script
The file is saved in /etc/init/server.conf
when trying to start the script (as root, or normal user), I get:
root@iof304:/etc/init# start server
start: Job failed to start
Then, I tried to check my syntax with init-checkconf, resulting in:
$ init-checkconf /etc/init/server.conf
File /etc/init/server.conf: syntax ok
I tried various other things, like initctl reload-configuration, with no result.
What can I do? How can I get this to work? It can't be that hard, right?
This is what our typical startup script looks like. As you can see, we run our node processes as the nodejs user. We also use the pre-start script to make sure all of the log file directories and .tmp directories are created with the right permissions.
#!upstart
description "grabagadget node.js server"
author "Jeffrey Van Alstine"
start on started mysql
stop on shutdown
respawn
script
export HOME="/home/nodejs"
exec start-stop-daemon --start --chuid nodejs --make-pidfile --pidfile /var/run/nodejs/grabagadget.pid --startas /usr/bin/node -- /var/nodejs/grabagadget/app.js --environment production >> /var/log/nodejs/grabagadget.log 2>&1
end script
pre-start script
mkdir -p /var/log/nodejs
chown nodejs:root /var/log/nodejs
mkdir -p /var/run/nodejs
mkdir -p /var/nodejs/grabagadget/.tmp
# Git likes to reset permissions on this file, but it really needs to be writable on server start
chown nodejs:root /var/nodejs/grabagadget/views/layout.ejs
chown -R nodejs:root /var/nodejs/grabagadget/.tmp
# Date format same as (new Date()).toISOString() for consistency
sudo -u nodejs sh -c 'echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Starting" >> /var/log/nodejs/grabagadget.log'
end script
pre-stop script
rm /var/run/nodejs/grabagadget.pid
sudo -u nodejs sh -c 'echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (sys) Stopping" >> /var/log/nodejs/grabagadget.log'
end script
As of Ubuntu 15.04, upstart is no longer used; see systemd.
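For reference, a rough systemd equivalent of the minimal upstart job above (a sketch using the question's paths; the nodejs binary location is an assumption, check it with which nodejs) would be saved as /etc/systemd/system/server.service:
[Unit]
Description=server
After=network.target

[Service]
Environment=HOME=/var/www
ExecStart=/usr/bin/nodejs /var/www/server/server.js
Restart=always

[Install]
WantedBy=multi-user.target
Then reload and start it:
sudo systemctl daemon-reload
sudo systemctl start server
sudo systemctl enable server
With systemd, stdout and stderr go to the journal by default (journalctl -u server), replacing the >> /var/log/node.log redirect.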