I am getting the following error while taking a MongoDB backup from a Docker container using a shell script. The error is shown below.
container name : uBot_sandbox_subhrajp
OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "exec: \"mongodump\": executable file not found in $PATH": unknown
Error: No such container:path: uBot_sandbox_subhrajp:/data/ubot-db-mongobackup_2021_02_09_13-09-58
My shell script is given below.
#!/bin/bash
export DATABASE_NAME="ubot-db"
export BACKUP_LOCATION="/home/UBOT"
export TIMESTAMP=$(date +'%Y_%m_%d_%H-%M-%S')
for NAMES in $(docker ps -a --format "{{.Names}}" --filter "name=uBot_sandbox"); do
    echo "container name : $NAMES"
    docker exec -t ${NAMES} mongodump --out /data/${DATABASE_NAME}-mongobackup_${TIMESTAMP} --db ${DATABASE_NAME}
    docker cp ${NAMES}:/data/${DATABASE_NAME}-mongobackup_${TIMESTAMP} ${BACKUP_LOCATION}
done
I need to take a MongoDB backup from every matched container and copy the dumps to the target location. Please help me resolve this error so the script runs as expected.
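A minimal sketch of how the loop could make this failure explicit, assuming the real fix is installing the MongoDB database tools (which provide mongodump) in the container image. The second error follows from the first: since mongodump never ran, the dump directory was never created, so docker cp reports "No such container:path". The -a flag is dropped here because docker exec only works on running containers.
for NAMES in $(docker ps --format "{{.Names}}" --filter "name=uBot_sandbox"); do
    # skip containers whose image does not provide mongodump
    if ! docker exec "${NAMES}" sh -c 'command -v mongodump' >/dev/null 2>&1; then
        echo "mongodump not found in ${NAMES}; install the MongoDB database tools in the image"
        continue
    fi
    docker exec -t "${NAMES}" mongodump --db "${DATABASE_NAME}" --out "/data/${DATABASE_NAME}-mongobackup_${TIMESTAMP}"
    docker cp "${NAMES}:/data/${DATABASE_NAME}-mongobackup_${TIMESTAMP}" "${BACKUP_LOCATION}"
done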
Related
I want to setup a cron script that automatically creates dumps from a specific Postgres database running in a Docker container. I know how to execute commands in a container from outside and also am familiar with pg_dump.
Somehow, for my container and database, I can't seem to make it work:
docker exec <container> pg_dump -U postgres <mydb> > /pg-snaps/<mydb>_$(date).sql
I get the following error:
zsh: no such file or directory: /pg-snaps/<mydb>_<date>.sql
The directory /pg-snaps exists. I can execute the same command inside the container, and it works. I have no idea why it doesn't work with docker exec. I looked up how to do this, and the suggested approach is the same as what I am doing. Wrapping the command to be executed in quotes also results in 'no such file or directory'.
Try this example:
docker exec -it <container> sh -c 'pg_dump -U postgres <mydb> > /pg-snaps/<mydb>_$(date).sql'
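With sh -c, the redirection now runs inside the container, so /pg-snaps must exist there rather than on the host. If you would rather write the dump to the host instead, here is a sketch under the assumption that the target directory exists on the host; the date format is tightened so the filename contains no spaces:
docker exec <container> pg_dump -U postgres <mydb> > ./<mydb>_$(date +%Y-%m-%d_%H-%M).sql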
I am trying to use the following approach in order to backup PostgreSQL database that is in a Docker container:
cmd /c 'docker exec -t postgres-dev pg_dump dvdrental -U postgres -c -v >
C:\\postgresql\\backup\\dvdrental_%date:~-4,4%.%date:~-7,2%.%date:~-10,2%_%time:~0,2%.%time:~3,2%.sql'
However, it produces the following file, which is 0 KB in size and has the extension %tme:
dvdrental_2021.05.24_%tme
I tried some different combinations, but the result is still the same and I am not able to get a backup with the following format:
dvdrental_2021.05.24_15.30
So, how can I get the expected result as mentioned above? I use Windows, but I think that is not important, as the command is executed in a Docker container.
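For what it's worth, a sketch that sidesteps the cmd.exe %date%/%time% expansion entirely, assuming a Unix-like shell such as Git Bash is available on the host: the timestamp is built by the host shell, the dump is written inside the container with -f, and the file is then copied into the current directory (move it to C:\postgresql\backup as needed). Container and database names are taken from the question.
TS=$(date +%Y.%m.%d_%H.%M)
docker exec postgres-dev pg_dump -U postgres -c -v -f /tmp/dvdrental_${TS}.sql dvdrental
docker cp postgres-dev:/tmp/dvdrental_${TS}.sql .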
I have a dump of my production DB, which I can restore easily in my Docker container with docker exec -it my_db_container pg_restore --user=my_user --dbname=dbname sql/current.dump. Everything works, and the data is there.
But when I re-dump my local database from the Docker container back to the project folder with docker exec -it my_db_container pg_dump -U my-user -F c -b dbname > docker/db/current_stripped.dump, the dump file is created (with appropriate size and content), but I cannot use it to restore (docker exec -it whasq-db pg_restore --user=my-user --dbname=dbname sql/current_stripped.dump) into a fresh DB: it fails with the error pg_restore: [custom archiver] could not read from input file: end of file. The restore command is the same as the one used in production (except the user, which is postgres in production).
I had the same problem and solved it as follows: I switched to the -f or --file option instead of piping (taken from https://stackoverflow.com/a/51073680/6593069), created the dump file inside the container, and then copied it to my host.
That would mean in your case:
docker exec -it my_db_container pg_dump -U my-user -F c -b dbname -f current_stripped.dump
docker cp my_db_container:current_stripped.dump .
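To verify the round trip, a sketch of restoring the copied dump into a fresh database; the /tmp path inside the container is just a placeholder of mine, while the user and database names come from the question:
docker cp current_stripped.dump my_db_container:/tmp/current_stripped.dump
docker exec -it my_db_container pg_restore --user=my-user --dbname=dbname /tmp/current_stripped.dump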
Hi Stack Overflow community!
I have a Maven/Java project which needs to be built with Jenkins pipelines.
To do so, I've configured the job using the Docker image maven:3.3.3. Everything works, except for the fact that I use ru.yandex.qatools.embed:postgresql-embedded. This works locally, but on Jenkins it complains about starting Postgres:
2019-02-08 09:31:20.366 WARN 140 --- [ost-startStop-1] r.y.q.embed.postgresql.PostgresProcess: Possibly failed to run initdb:
initdb: cannot be run as root
Please log in (using, e.g., "su") as the (unprivileged) user that will own the server process.
2019-02-08 09:31:40.999 ERROR 140 --- [ost-startStop-1] r.y.q.embed.postgresql.PostgresProcess: Failed to read PID file (File '/var/.../target/database/postmaster.pid' does not exist)
java.io.FileNotFoundException: File '/var/.../target/database/postmaster.pid' does not exist
Apparently, Postgres does not allow itself to be run with superuser privileges, for security reasons.
I've tried to run as a non-root user by creating my own version of the Docker image and adding the following to the Dockerfile:
RUN useradd myuser
USER myuser
And this works when I start the Docker image from the server's terminal. But when using the Jenkins pipeline, whoami still prints 'root', which suggests that Jenkins Pipeline uses docker run -u behind the scenes, which would overrule the Dockerfile?
My pipeline job is currently as simple as this:
pipeline {
    agent {
        docker {
            image 'custom-maven:1'
        }
    }
    stages {
        stage('Checkout') {
            ...
        }
        stage('Build') {
            steps {
                sh 'whoami'
                sh 'mvn clean install'
            }
        }
    }
}
So, my question: How do I start this docker image as a different user? Or switch users before running mvn clean install?
UPDATE:
By adding -u myuser as args in the Jenkins pipeline, I do log in as the correct user, but then the job can't access the Jenkins log file (and hopefully that's the only problem). The user myuser is added to the group root, but this makes no difference:
agent {
    docker {
        image 'custom-maven:1'
        args '-u myuser'
    }
}
And the error:
sh: 1: cannot create /var/.../jenkins-log.txt: Permission denied
sh: 1: cannot create /var/.../jenkins-result.txt.tmp: Permission denied
mv: cannot stat ‘/var/.../jenkins-result.txt.tmp’: No such file or directory
touch: cannot touch ‘/var/.../jenkins-log.txt’: Permission denied
I have solved the issue in our case. What I did was use sudo before the mvn command. Keep in mind that every sh step has its own shell, so you need to use sudo in each sh step:
sh 'sudo -u <youruser> mvn <clean or whatever> -f <path/to/pomfile.xml>'
The user must be created in the Dockerfile. I've created it without a password, but I don't think that matters since you are root...
You must use sudo instead of switching users, since otherwise you would need to provide a password.
This is far from a clean way... I would suggest not messing with user switching unless you really need to (e.g. to run an embedded Postgres).
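For reference, a minimal Dockerfile sketch of the setup this answer implies (an assumption of mine, not the answer's exact Dockerfile): it creates the unprivileged user and installs sudo, assuming the maven:3.3.3 base image is Debian-based and does not already ship sudo. Because the Jenkins steps run as root, no sudoers entry is needed for sudo -u myuser.
FROM maven:3.3.3
# unprivileged user for the embedded Postgres started during the build
RUN useradd -m myuser \
 && apt-get update \
 && apt-get install -y --no-install-recommends sudo \
 && rm -rf /var/lib/apt/lists/*
The pipeline steps then stay as shown above, e.g. sh 'sudo -u myuser mvn clean install'.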
@Thomas Stubbe
I got almost the same error as you mentioned above. I also have an image with PostgreSQL on it. In the Dockerfile I created a user named tdv, and the id command inside the container shows that tdv's permissions are 1000:1000. I can start the container through a Jenkins pipeline, but it fails to execute the sh commands, even though I add sudo -u tdv for each command. Did you do any other configuration?
My Jenkins pipeline script is like the following:
pipeline {
    agent none
    stages {
        stage('did operation inside container') {
            agent {
                docker {
                    image 'tdv/tdv-test:8.2'
                    label 'docker_machine'
                    customWorkspace "/opt/test"
                    registryUrl 'https://xxxx.xxxx.com'
                    registryCredentialsId '8269c5cd-321e-4fab-919e-9ddc12b557f3'
                    args '-u tdv --name tdv-test -w /opt/test -v /opt/test:/opt/test:rw,z -v /opt/test#tmp:/opt/test#tmp:rw,z -p 9400:9400 -p 9401:9401 -p 9402:9402 -p 9403:9403 -p 9407:9407 -p 9303:9303 --cpus=2.000 -m=4g xxx.xxx.com/tdv/tdv-test:8.2 tdv.server'
                }
            }
            steps {
                sh 'sudo tdv whoami'
                sh 'sudo tdv pwd'
                sh 'sudo tdv echo aaa'
            }
        }
    }
}
After the job runs, I can see that the container actually starts up, but it still gets errors like the following:
$ docker top f1140072d77c5bed3ce43a5ad2ab3c4be24e8c32cf095e83c3fd01a883e67c4e -eo pid,comm
ERROR: The container started but didn't run the expected command. Please double check your ENTRYPOINT does execute the command passed as docker run argument, as required by official docker images (see https://github.com/docker-library/official-images#consistency for entrypoint consistency requirements).
Alternatively you can force image entrypoint to be disabled by adding option `--entrypoint=''`.
[Pipeline] {
[Pipeline] sh
sh: /opt/test#tmp/durable-081990da/jenkins-log.txt: Permission denied
sh: /opt/test#tmp/durable-081990da/jenkins-result.txt.tmp: Permission denied
touch: cannot touch ‘/opt/test#tmp/durable-081990da/jenkins-log.txt’: Permission denied
mv: cannot stat ‘/opt/test#tmp/durable-081990da/jenkins-result.txt.tmp’: No such file or directory
touch: cannot touch ‘/opt/test#tmp/durable-081990da/jenkins-log.txt’: Permission denied
touch: cannot touch ‘/opt/test#tmp/durable-081990da/jenkins-log.txt’: Permission denied
Solution
You currently have sh 'sudo tdv whoami'
I think you should add the -u flag after sudo, like this:
sh 'sudo -u tdv whoami'
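Applied to all three steps from the pipeline above, that would be:
steps {
    sh 'sudo -u tdv whoami'
    sh 'sudo -u tdv pwd'
    sh 'sudo -u tdv echo aaa'
}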
I use Docker to develop.
docker exec -it <My-Name> mongo
I want to import data into MongoDB from a JSON file, but it fails.
The command is
mongoimport -d <db-name> -c <c-name> --file xxx.json
What can I do?
From your description, it seems that you have a data file in JSON format on your host machine and you want to import this data into MongoDB, which is running in a container.
You can follow these steps to do so.
#>docker exec -it <container-name> mongo
#>docker cp xxx.json <container-name-or-id>:/tmp/xxx.json
#>docker exec <container-name-or-id> mongoimport -d <db-name> -c <c-name> --file /tmp/xxx.json
In the last step you have to use a file path that is available inside the container.
To debug further, if required, you can log into the container and run the commands the way you would on a Linux machine:
#>docker exec -it <container-name-or-id> sh
sh $>cat /tmp/xxx.json
sh $>mongoimport -d <db-name> -c <c-name> --file /tmp/xxx.json
To run without copying the file:
docker exec -i <container-name-or-id> sh -c 'mongoimport -c <c-name> -d <db-name> --drop' < xxx.json
Step 1: In your host terminal, navigate to the directory where the JSON file is located.
Step 2: Use the command "docker cp xxx.json mongo:/tmp/xxx.json" to copy the JSON file from the current host directory to the container's "tmp" directory.
Step 3: Open a shell in the container with the command "docker container exec -it mongo bash".
Step 4: Import the collection from the "tmp" folder into the container database with the command: mongoimport --uri="<mongodb connection uri>" --collection=<c-name> --file /tmp/xxx.json
What we have:
MongoDB running in a Docker container.
A JSON file to import from the local machine.
mongoimport.exe on the local machine.
What to do to import this JSON file as a collection:
mongoimport --uri=<connection-string> --collection=<collection-name> --file=<path-to-file>
Example:
mongoimport --uri="mongodb://localhost:27017/test" --collection=books --file="C:\Data\books.json"
More details regarding mongoimport here: https://www.mongodb.com/docs/database-tools/mongoimport/