Cannot connect to volume in Docker (Windows) - postgresql

I am trying to run PostgreSQL in Docker with the following command in a terminal:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
and I keep running into this error: Error response from daemon: The system cannot find the file specified.
I have looked up this error, and the solutions I found online (restarting Docker Desktop, reinstalling Docker, updating Docker) did not work for me.
I think the issue is with the volume part (designated by -v) because when I remove it, it works just fine. However, I want to be able to store the contents in a volume permanently, so running it without the -v is not a long-term solution.
Has anyone run into a similar issue before?

First, check whether you can access the path on the host:
dir C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data
Then check whether you can mount it as a volume inside a container:
winpty docker run -v C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/data alpine ls /data
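If both checks pass, the failure is often the shell rewriting the Windows path before Docker ever sees it. A sketch of common workarounds for Git Bash (these are assumptions about your shell, not a confirmed diagnosis):
# 1) Write the host path Unix-style, with forward slashes:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v /c/Users/SomeUser/OneDrive/Documents/ny_taxi_postgres_data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
# 2) Or disable MSYS path conversion for a single invocation:
MSYS_NO_PATHCONV=1 winpty docker run -it \
-e POSTGRES_USER="root" -e POSTGRES_PASSWORD="root" -e POSTGRES_DB="ny_taxi" \
-v "C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/var/lib/postgresql/data" \
-p 5432:5432 postgres:13
# 3) Or sidestep host paths entirely with a named volume:
# -v ny_taxi_postgres_data:/var/lib/postgresql/data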

Related

create postgres db container in local docker on mac

I am new to Docker, and I'm confused about creating a postgres container.
What is -v /data:/var/lib/postgresql/data in the command line below, which creates a container in Docker? Is it setting the volume? Can I change the path? I cannot find postgresql under /lib, and so I cannot find the file path when I want to add the file permission under File Sharing in the Docker settings.
sudo docker run -d --name mybd --network mydb-network -p 5432:5432 -v /data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mydb -e PGDATA=/var/lib/postgresql/data/pgdata postgres
Running the container gives an error
When I tried to run the container in Docker, it showed this error (the screenshot is not reproduced here). Then I went to the Docker settings and wanted to add the /data path to File Sharing; however, I cannot find the /data path.
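For context, a minimal sketch contrasting the two -v forms (the container and volume names here are illustrative):
# Bind mount: the left side is a host directory. On Docker Desktop
# (Mac or Windows) it must live under a shared path configured in
# Settings > Resources > File Sharing.
docker run -d --name pg-bind -e POSTGRES_PASSWORD=mydb \
-v "$HOME/pgdata":/var/lib/postgresql/data postgres
# Named volume: the left side is just a name; Docker manages the
# storage itself and no File Sharing entry is needed.
docker run -d --name pg-vol -e POSTGRES_PASSWORD=mydb \
-v pgdata:/var/lib/postgresql/data postgres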

I cannot run more than 2 containers in Docker

I'm running docker for a project on Github. (https://github.com/mydexchain/mydexchain)
I'm running the code below and creating an image file and a container.
docker run -d --rm -p 2020:2020 -v mydexchain:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain mydexchain/mydexchain:latest
I set my tracker address on port 2020.
I run the container with the attach command. "docker attach mydexchain"
I'm running the code below and creating a second container.
docker run -d --rm -p 2021:2020 -v mydexchain1:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain1 mydexchain/mydexchain:latest
I set my tracker address on port 2021.
I run the container with the attach command. "docker attach mydexchain1"
So far, everything is normal.
I'm running the code below and creating a third container.
docker run -d --rm -p 2022:2020 -v mydexchain2:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain2 mydexchain/mydexchain:latest
I check the containers with the docker ps command (the screenshot is not reproduced here).
As soon as I try to do anything with this container, like attach or set the tracker, the container disappears (again shown in a screenshot that is not reproduced here).
The log output was also posted as a screenshot that is not reproduced here.
When I did this procedure the first time, I did not encounter any errors.
I would be glad if you could help; I have been working on this for a week and could not find a solution.
Have you ensured that all volumes are clean and set up the same way? Usually the path /var/lib/postgresql/data is what gets mounted into the container.
It might also be related to pausing or closing your previous attach command, which kills that container, since you are missing the -i and -t flags when launching it. Those should be used to prevent it from closing; see the documentation of the attach command for details.
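A sketch of the suggested launch, based on that answer (the /data suffix on the mount is an assumption; the image may expect a different mount point):
docker run -dit --rm -p 2022:2020 \
-v mydexchain2:/var/lib/postgresql/data \
--privileged --log-driver=none \
--name mydexchain2 mydexchain/mydexchain:latest
docker attach mydexchain2
# Detach without stopping the container by pressing Ctrl-p, then Ctrl-q.
# That detach sequence only works when the container was started with -i and -t.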

What is the best way to install tensorflow and mongodb in docker?

I want to create a Docker container or image with both TensorFlow and MongoDB installed. I have seen that there are Docker images for each application, but I need them working together: from a MongoDB database I must extract the data to feed a model built in TensorFlow.
So I want to know whether a configuration like that is possible. I have tried with an Ubuntu container, installing the applications I need inside it, but I don't know if there is another way to do it.
Thanks.
Interesting that I found this post, as I just worked out one solution for myself. It may not be the one for you, though.
What I did is: docker pull mongo, then run it as a daemon:
#!/bin/bash
export VOLUME='/home/user/code'
docker run -itd \
--name mongodb \
--publish 27017:27017 \
--volume ${VOLUME}:/code \
mongo
Here
the 'd' in '-itd' means run as a daemon (like a service, not interactive);
the --volume is optional and may be omitted.
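To confirm mongod came up, an illustrative check (newer mongo images ship the mongosh shell; older ones use mongo):
docker logs mongodb | tail -n 5
docker exec -it mongodb mongosh --eval 'db.runCommand({ ping: 1 })'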
Then docker pull tensorflow/tensorflow and run it with:
#!/bin/bash
export VOLUME='/home/user/code'
docker run \
-u 1000:1000 \
-it --rm \
--name tensorflow \
--volume ${VOLUME}:/code \
-w /code \
-e HOME=/code/tf_mongodb \
tensorflow/tensorflow bash
Here
the -u makes the container's bash run with the same ownership (UID:GID) as the host user;
the --volume maps the host folder /home/user/code to /code in the container;
the -w makes bash start in /code, which is /home/user/code on the host;
the -e HOME= option sets bash's $HOME folder so that you can pip install later.
Now you have a bash prompt, so you can:
create a virtual env folder under /code (which maps to /home/user/code),
activate the venv,
pip install pymongo,
and then connect to the mongodb you ran in Docker (localhost may not work; use the host IP address instead).
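A minimal sketch of those steps inside the tensorflow container (the venv path and the default bridge gateway 172.17.0.1 are assumptions; substitute your host's address if it differs):
cd /code
python3 -m venv tf_mongodb/venv
source tf_mongodb/venv/bin/activate
pip install pymongo
python3 -c "import pymongo; print(pymongo.MongoClient('172.17.0.1', 27017).server_info()['version'])"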

PostgreSQL docker container not writing data to disk

I am having some difficulty with Docker and the postgres image from Docker Hub. I am developing an app and using the postgres container to store my development data. I am using the following command to start my container:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser -e PGDATA="/pgdata" --mount source=mydata,target=/home/myuser/pgdata -p 5432:5432/tcp postgres
When I finish working on my app, I usually have to run "docker container prune" in order to free up the container name and be able to run it again later. This worked until recently, when I upgraded my postgres image to PostgreSQL version 11. Now, when I start my container and create data in it, the data is gone the next time I use it. I've been reading about volumes in the Docker documentation but cannot find anything that tells me why this is not working. Can anyone please shed some light on this?
Specify a volume mount with -v $PGDATA_HOST:/var/lib/postgresql/data.
The default PGDATA inside the container is /var/lib/postgresql/data so there is no need to change that if you're not modifying the Docker image.
e.g. to mount the data directory on the host at /srv/pgdata/:
$ PGDATA_HOST=/srv/pgdata/
$ docker run -d -p 5432:5432 --name=some-postgres \
-e POSTGRES_PASSWORD=secret \
-v $PGDATA_HOST:/var/lib/postgresql/data \
postgres
The \ are only needed if you break the command over multiple lines, which I did here for the sake of clarity.
Since you specified -e PGDATA="/pgdata", the database data will be written to /pgdata within the container, while your --mount targets /home/myuser/pgdata, a different location entirely. If you want the files in /pgdata to survive container deletion, that location must be a Docker volume. To make it one, use --mount source=mydata,target=/pgdata.
In the end, it would be simpler to just run:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser --mount source=mydata,target=/var/lib/postgresql/data -p 5432:5432/tcp postgres
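To verify the fix, an illustrative check that the data now outlives the container:
docker volume inspect mydata   # the named volume should exist
docker rm -f some-postgresql   # remove the container
# Re-run the docker run command above; databases created earlier should
# still be there, because the volume outlived the container.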

xhost command for docker GUI apps (Eclipse)

I'm looking at running a GUI app in Docker. I've heard that this incurs security problems because the X server is exposed. I'd like to know what is being done in each of the following steps, specifically the xhost local:root:
[ -d ~/workspace ] || mkdir ~/workspace
xhost local:root
docker run -i --net=host --rm -e DISPLAY -v $HOME/workspace/:/workspace/:z docbill/ubuntu-umake-eclipse
[ -d ~/workspace ] || mkdir ~/workspace
This creates a workspace directory in your home directory if it doesn't already exist.
xhost local:root
This permits the root user on the local machine to connect to the X windows display.
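To revoke that grant when you are done (the symmetric command, not part of the original steps):
xhost -local:root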
docker run -i --net=host --rm -e DISPLAY -v $HOME/workspace/:/workspace/:z docbill/ubuntu-umake-eclipse
This runs a container with the following options:
-i: interactive; input typed after this command runs is received by the process launched inside the container.
--net=host: host networking; the container is not launched with an isolated network stack. Instead, all networking interfaces of the host are directly accessible inside the container.
--rm: automatically clean up the container on exit. Otherwise the container would remain in a stopped state.
-e DISPLAY: pass through the DISPLAY environment variable from the host into the container. This tells GUI programs where to send their output.
-v $HOME/workspace/:/workspace/:z: map the workspace folder from your home directory on the host to the /workspace folder inside the container, with SELinux sharing settings enabled.
docbill/ubuntu-umake-eclipse: run this image, authored by user docbill on Docker Hub (anyone is able to create an account there). This is not an official image from Docker but a community-submitted image.
From the options, this command is most likely designed for users running on RHEL or CentOS Docker host. It will not work on Docker for Windows or Docker for Mac, but should work on other variants of Linux.
I've used similar commands to run my containers with a GUI, but without xhost and host networking. Instead, I've just mapped the X windows socket (/tmp/.X11-unix) directly into the container:
docker run -it --rm -e DISPLAY -u `id -u` \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v /etc/localtime:/etc/localtime:ro \
my_gui_image