I am using Prisma 1.9 with Postgres.
How can I reset everything? I have tried prisma local nuke, but this command adds a MySQL container (somehow) and then throws an error that port 4466 is already in use...
I have also tried to recreate all Docker containers and images, but that didn't help either.
So, what's the right way? prisma reset only deletes the data, but I want to delete the schema as well. I want to wipe it all.
Thanks for asking this one. Meet my two favorite commands:
armaggedon - removes everything, including images
docker-wipe - removes everything except images
Keep them available by adding them to your shell's profile file.
removecontainers() {
    docker stop $(docker ps -aq)
    docker rm $(docker ps -aq)
}

armaggedon() {
    removecontainers
    docker network prune -f
    docker rmi -f $(docker images --filter dangling=true -qa)
    docker volume rm $(docker volume ls --filter dangling=true -q)
    docker rmi -f $(docker images -qa)
}

docker-wipe() {
    docker kill $(docker ps -aq)
    docker rm $(docker ps -aq)
}
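For example, assuming a Bash setup (the ~/.bashrc path is an assumption; zsh users would use ~/.zshrc instead):
# reload the profile after pasting the functions into it, then call either one
source ~/.bashrc
armaggedon     # removes containers, networks, dangling volumes and all images
docker-wipe    # removes containers only, keeping images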
Related
I want to build a PostgreSQL image that only contains some extra .sql files to be executed at startup.
Dockerfile:
FROM postgres:11.9-alpine
USER postgres
WORKDIR /
COPY ddl/*.sql /docker-entrypoint-initdb.d/
Then I build the image:
docker build -t my-postgres:1.0.0 -f Dockerfile .
And run the container:
docker run -d --name my-database \
-e POSTGRES_PASSWORD=abc123 \
-p 5432:5432 \
my-postgres:1.0.0
The output is the container ID:
33ed596792a80fc08f37c7c0ab16f8827191726b8e07d68ce03b2b5736a6fa4e
Checking the running containers returns nothing:
docker container ls
But if I explicitly start it, it works
docker start my-postgres
With the original PostgreSQL image, docker run already starts the database. Why doesn't it after building my own image?
It turned out that one of the copied .sql files was failing to execute and, based on this documentation, that makes the entrypoint script exit. Fixing the SQL solved the issue and the container started normally with docker run.
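If you hit the same symptom, the container's logs usually show which init script failed. For the container name used above, that check would simply be:
docker logs my-database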
I started a MongoDB container like so:
docker run -d -p 27017:27017 --net=cdt-net --name cdt-mongo mongo
I saw that my MongoDB container exited:
0e35cf68a29c mongo "docker-entrypoint.s…" Less than a second ago Exited (1) 3 seconds ago cdt-mongo
I checked my Docker logs and saw:
$ docker logs 0e35cf68a29c
about to fork child process, waiting until server is ready for connections.
forked process: 21
2018-01-12T23:42:03.413+0000 I CONTROL [main] ***** SERVER RESTARTED *****
2018-01-12T23:42:03.417+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/tmp.aLmNg7ilAm: No space left on device
ERROR: child process failed, exited with error number 1
Does anyone know what this error is about? Not enough space in the container?
I had to delete old Docker images to free up space; here are the commands I used:
# remove all unused / orphaned images
echo -e "Removing unused images..."
docker rmi -f $(docker images --no-trunc | grep "<none>" | awk "{print \$3}") 2>&1 | cat;
echo -e "Done removing unused images"
# clean up stuff -> using these instructions https://lebkowski.name/docker-volumes/
echo -e "Cleaning up old containers..."
docker ps --filter status=dead --filter status=exited -aq | xargs docker rm -v 2>&1 | cat;
echo -e "Cleaning up old volumes..."
docker volume ls -qf dangling=true | xargs docker volume rm 2>&1 | cat;
We've experienced this problem recently while using docker-compose with mongo and a bunch of other services. There are two fixes which have worked for us.
Clear down unused stuff
# close down all services
docker-compose down
# clear unused docker images
docker system prune
# press y
Increase the disk image size available to Docker. This will depend on your installation of Docker; on Mac, for example, it defaults to 64 GB and we doubled it to 128 GB via the UI.
We've had this problem in both Windows and Mac and the above fixed it.
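Before resizing, it can help to confirm what is actually eating the space; a quick, read-only check is:
# summarize space used by images, containers, volumes and build cache
docker system df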
I have created a docker volume for postgres on my local machine.
docker volume create postgres-data
Then I used this volume to run a container:
docker run -it -v postgres-data:/var/lib/postgresql/9.6/main postgres
After that I did some database operations, which got stored automatically in postgres-data. Now I want to copy that volume from my local machine to another remote machine. How can I do that?
Note: the database is very large.
If the second machine has SSH enabled you can use an Alpine container on the first machine to map the volume, bundle it up and send it to the second machine.
That would look like this:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c \
"cd /from ; tar -cf - . " | \
ssh <TARGET_HOST> \
'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - "'
You will need to change:
SOURCE_DATA_VOLUME_NAME
TARGET_HOST
TARGET_DATA_VOLUME_NAME
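For the postgres-data volume from the question, a filled-in sketch might look like this (the remote host name and the target volume name here are only assumptions):
docker run --rm -v postgres-data:/from alpine ash -c \
"cd /from ; tar -cf - . " | \
ssh user@remote-host \
'docker run --rm -i -v postgres-data:/to alpine ash -c "cd /to ; tar -xpvf - "'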
Or, you could try using this helper script https://github.com/gdiepen/docker-convenience-scripts
Hope this helps.
I had the exact same problem, but in my case both volumes were in separate VPCs and I couldn't expose SSH to the outside world. I ended up creating dvsync, which uses ngrok to create a tunnel between them and then uses rsync over SSH to copy the data. In your case you could start the dvsync-server on your machine:
$ docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=postgres-data,target=/data,readonly \
quay.io/suda/dvsync-server
and then start the dvsync-client on the target machine:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
The NGROK_AUTHTOKEN can be found in the ngrok dashboard, and the DVSYNC_TOKEN is shown by the dvsync-server in its stdout.
Once the synchronization is done, the dvsync-client container will stop.
I am creating a backup script to dump MongoDB inside a container, and I need to copy the dump folder outside the container. docker cp doesn't seem to work with wildcards:
docker cp mongodb:mongo_dump_* .
The following is thrown in the terminal:
Error response from daemon: lstat /var/lib/docker/aufs/mnt/SomeHash/mongo_dump_*: no such file or directory
Is there any workaround to use wildcards with the cp command?
I had a similar problem, and had to solve it in two steps:
$ docker exec <id> bash -c "mkdir -p /extract; cp -f /path/to/fileset* /extract"
$ docker cp <id>:/extract/. .
It seems there is no way yet to use wildcards with the docker cp command: https://github.com/docker/docker/issues/7710
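Applied to the mongodb container and dump folders from the question, the two steps might look like this (assuming the dump folders sit in the container's root; -r is added because mongodump produces directories):
docker exec mongodb bash -c "mkdir -p /extract; cp -rf /mongo_dump_* /extract"
docker cp mongodb:/extract/. .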
You can create the mongo dump files in a folder inside the container and then copy that folder, as detailed in the other answer here.
If you have a large dataset and/or need to do the operation often, the best way to handle it is to use Docker volumes, so you can access the container's files directly from a host folder without any extra command: https://docs.docker.com/engine/userguide/containers/dockervolumes/
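A minimal sketch of that approach, assuming a host directory ./dumps and illustrative names for the container and dump folder:
# start mongo with a bind-mounted dump directory
docker run -d --name mongodb -v "$(pwd)/dumps":/dumps mongo
# write the dump straight into the mounted directory; it appears under ./dumps on the host
docker exec mongodb mongodump --out /dumps/mongo_dump_example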
I faced the same problem today and solved it like this:
docker exec container /bin/sh -c 'tar -cf - /some/path/*' | tar -xvf -
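For the mongodb container and dump folders from the question, that would be something like (assuming the dumps sit in the container's root):
docker exec mongodb /bin/sh -c 'cd / && tar -cf - mongo_dump_*' | tar -xvf -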
Hope this helps.
I am trying to start a Mongo container using shared folders on Windows with Boot2Docker. When I start it with docker run -it -v /c/Users/310145787/Desktop/mongo:/data/db mongo, I get a warning message inside the container saying:
WARNING: This file system is not supported.
After starting, mongo shuts down immediately.
Any hints or tips on how to solve this?
Apparently, according to this gist and Sev (sevastos), mongo doesn't support volumes mounted through VirtualBox shared folders:
See the MongoDB Production Notes:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this operation.
The easiest solution of all, and a proper way to achieve data persistence, is data volumes:
Assuming you have a container that has VOLUME ["/data"]
# Create a data volume
docker create -v /data --name yourData busybox true
# and use
docker run --volumes-from yourData ...
This isn't always ideal, though. The following alternative is for Mac, by Edward Chu (chuyik):
I don't think it's a good solution, because the data has just moved to another container, right? It's still inside a container rather than on the local system (the Mac's disk).
I found another solution: use sshfs to map data between the boot2docker VM and your Mac, which may be better since the data is not stored inside a Linux container.
Create a directory to store data inside boot2docker:
boot2docker ssh
mkdir -p /mnt/sda1/dev
Use sshfs to link boot2docker and your Mac:
echo tcuser | sshfs docker@localhost:/mnt/sda1/dev <your mac dir path> -p 2022 -o password_stdin
Run image with mongo installed:
docker run -v /mnt/sda1/dev:/data/db <mongodb-image> mongod
The corresponding boot2docker issue points to Docker issue 12590 ("Problem with -v shared folders in 1.6"), which mentions the workaround of using a double slash:
Using a double slash seems to work. I checked it locally and it works.
docker run -d -v //c/Users/marco/Desktop/data:/data <image name>
It also works with:
docker run -v /$(pwd):/data
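Applied to the path from the original question, that would be something like:
docker run -it -v //c/Users/310145787/Desktop/mongo:/data/db mongo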
As a workaround, I just copy from a folder before the mongo daemon starts. In my case I don't care about the journal files, so I only copy the database files.
I've used this command in my docker-compose.yml:
command: bash -c "(rm /data/db/*.lock && cd /prev && cp *.* /data/db) && mongod"
And every time before stopping the container I use:
docker exec <container_name> bash -c 'cd /data/db && cp $(ls *.* | grep -v *.lock) /prev'
Note: /prev is set as a volume (path/to/your/prev:/prev).
Another workaround is to use mongodump and mongorestore.
In docker-compose.yml:
command: bash -c "(sleep 30; mongorestore --quiet) & mongod"
In the terminal:
docker exec <container_name> mongodump
Note: I use sleep because I want to make sure that mongo started, and it takes a while.
I know this involves manual work, etc., but I am happy that at least I got mongo with existing data running on my Windows 10 machine, and I can still work on my MacBook when I want.
It seems like you don't need the data directory for MongoDB; removing those lines from your docker-compose.yml should work without problems.
The data directory is only used by mongo as a cache.