If there is a running docker-compose stack, is it possible to find out what location (aka workdir) was used to launch the containers?
Let's assume the docker-compose.yml was moved or deleted afterward, so searching the filesystem with find isn't reliable.
For context, here's what happens when running docker-compose up -d with the same file from a different location:
Creating redis ...
Creating solr ...
Creating mysql ...
ERROR: for redis-db Cannot create container for service redis-db: Conflict. The container name "/redis" is already in use by container "48b96cac70369b24a31435c34ba34a34306fd395c6412fc71b4f64f79d708b39". You have to remove (or rename) that container to be able to reuse that name.
ERROR: for mysql-db Cannot create container for service mysql-db: Conflict. The container name "/mysql" is already in use by container "cb32c60caed4b225f082a4ddb22f527e815c94a59c5ced32a1b9bb7234aa96cb". You have to remove (or rename) that container to be able to reuse that name.
ERROR: for solr-server Cannot create container for service solr-server: Conflict. The container name "/solr" is already in use by container "91fe8d364f4f42e00d89d69be46ae01315a0753d81dfd9288ee6caa308b4ebf9". You have to remove (or rename) that container to be able to reuse that name.
Encountered errors while bringing up the project.
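One way to recover the original location, assuming the stack was started by a Compose version new enough (roughly 1.25+ or Compose v2) to label its containers with the project's working directory, is to inspect any container from the stack:
docker inspect --format '{{ index .Config.Labels "com.docker.compose.project.working_dir" }}' redis
On older versions, docker inspect redis | grep com.docker.compose at least shows which project labels are available.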
I have a docker-compose.yml from which I started a couple of services. I added a new volume mapping to one of the services and then tried to restart the container with
docker compose restart <service_name>
but the volume is still not mapped and not available from within the container.
What is the right way to add a volume to a service defined with docker compose?
OK, so it turns out that restart just restarts the existing container and changes nothing about the parameters with which it was created.
In order to have compose take volume mapping changes in the docker-compose.yml file into account, one has to run:
docker compose up --build <service_name>
There might be other solutions, but this is what I ended up doing.
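For illustration, a minimal sketch (service name and paths are hypothetical): after adding the mapping to the service in docker-compose.yml, e.g.
services:
  web:
    volumes:
      - ./data:/app/data
recreating the service picks the change up:
docker compose up -d --build web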
I have a Docker container that I build from a Dockerfile via docker-compose. I have a named volume; on the first build, a file is copied into /state/config.
All is well: while the container is running, /state/config receives more data because of a process I have running.
The volume is set up like so:
volumes:
- config_data:/state/config
In the Dockerfile I copy the file like so:
COPY --from=builder /src/runner /state/config/runner
So, as I say, on the first run - when no container or volume exists yet - /state/config receives the "runner" file and also gets data added to this same directory while the container is running.
Now I don't wish to destroy the volume, but if I rebuild the container using docker build or docker-compose build --no-cache, the volume stays - which is what I want - but runner is NOT updated.
I even tried to exec into the container, remove runner, and rebuild the container again, and now the copying of the file does not happen at all.
I wondered why this is happening.
Of course, I think I may have a workaround: place the file inside the container using a temporary (anonymous) volume instead of a named volume, meaning the next time the container is re-created the file is recopied.
But I am confused about why it's happening.
Can anybody help?
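As far as I understand it, Docker copies image contents into a named volume only when the volume is first created and empty; once the volume holds data, rebuilding the image never touches it, which would explain both observations above. A hedged workaround sketch (staging path hypothetical): keep the image's copy of the file outside the volume path and refresh it at container start:
# in the Dockerfile, stage the file outside the volume mount point
COPY --from=builder /src/runner /opt/seed/runner
# refresh the volume copy on every start, then exec the real command
ENTRYPOINT ["sh", "-c", "cp /opt/seed/runner /state/config/runner && exec \"$@\"", "--"]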
Saving data in Kubernetes is not persistent, so we should use a volume.
For example, we can mount "/apt" to save data in "apt".
Now I want to mount "/" but I get this error:
Error: Error response from daemon: invalid volume specification:
'/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/':
invalid mount config for type "bind": invalid specification:
destination can't be '/'
The question is: how can I mount "/" in Kubernetes?
Not completely sure about your environment, but I ran into this issue today because I wanted to be able to browse the entire root filesystem of a container via SSH (WinSCP) to the host. I am using Docker in a Photon OS VM environment. The answer I've come to is: you can't do what you're trying to do, but you may be able to accomplish what you're trying to accomplish. Let's say I created a volume called mysql and I create a new (oversimplified) mysql container using that volume as root:
docker volume create --name mysql
docker run -d --name=mysqldb -v /var/lib/docker/volumes/mysql:/ mysql:5.7
Docker will cry and say I can't mount to root (destination can't be '/'). However, since I know the location where our volumes live (/var/lib/docker/volumes/), we can simply create our container as normal, and an arbitrarily-named volume will be placed in that folder.
So if your goal is (as mine was) to be able to SSH to the host and browse the files in the root of your container, you CAN do that; you just need to go to the correct arbitrarily-named volume. In my case it is "/var/lib/docker/volumes/12dccb66f2eeaeefe8e1feabb86f3c6def87b091dabeccad2902851caa97f04c/_data", which isn't as pretty as "/var/lib/docker/volumes/mysql", but it gets the job done.
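If you don't want to hunt for the hash by hand, one way to find it (a sketch, assuming the container is running) is to ask Docker where the container's mounts live:
docker inspect --format '{{ json .Mounts }}' mysqldb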
Hope that helps someone.
I have a container called "postgres", build with plain docker command, that has a configured PostgreSQL inside it. Also, I have a docker-compose setup with two services - "api" and "nginx".
How to add the "postgres" container to my existing docker-compose setup as a service, without rebuilding? The PostgreSQL database is configured manually, and filled with data, so rebuilding is a really, really bad option.
I went through the docker-compose documentation, but found no way to do this without a rebuild, sadly.
Unfortunately this is not possible.
You don't reference containers in docker-compose; you use images.
You need to create a volume and/or bind mount to keep your database data.
This is because containers do not persist data on their own: if you have filled one with data and did not give it a bind mount or a volume, you will lose everything when the container is removed.
Recommendation:
docker cp
docker cp will copy the contents from the container to the host: https://docs.docker.com/engine/reference/commandline/container_cp/
Create a folder to save all your PostgreSQL data (ex: /home/user/postgre_data/);
Save the contents of your PostgreSQL container's data directory to this folder (see the postgres Docker Hub page below for further reference);
Run a new PostgreSQL container (same version) with a bind mount pointing to the new folder;
This will preserve all your data, and you will then be able to use a volume or bind mount with it in docker-compose.
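A rough sketch of those steps, assuming the container is named postgres as in the question and uses the official image's default data path (the version tag is hypothetical; stop the container first so the data files are consistent):
docker stop postgres
docker cp postgres:/var/lib/postgresql/data /home/user/postgre_data
docker run -d --name postgres-new -v /home/user/postgre_data:/var/lib/postgresql/data postgres:11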
Reference of docker-compose volumes: https://docs.docker.com/compose/compose-file/#volumes
Reference of postgres docker image: https://hub.docker.com/_/postgres/
Reference of volumes and bind mounts: https://docs.docker.com/storage/bind-mounts/#choosing-the--v-or---mount-flag
You can save this container as a new image using docker container commit and use that newly created image in your docker-compose:
docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
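For example, with the postgres container from the question and a hypothetical image name:
docker container commit postgres my-postgres:snapshot
and then in docker-compose.yml:
services:
  postgres:
    image: my-postgres:snapshot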
I however prefer creating images with the use of Dockerfiles and scripts to fill my data etc.
I am relatively new to Docker. I'd like to set up a postgres database, but I wonder how to make sure that the data isn't lost if I recreate the container.
Then I stumbled over named volumes (not bind mounts) and how to use them.
But... in a Dockerfile you can't use named volumes, e.g. data:/var/lib etc.
As I understand it, with a Dockerfile it's always an anonymous volume, so every single time I recreate a container it would get its own new volume.
So here come my questions:
Firstly: how do I make sure, if the container gets updated or recreated, that the postgres database within the new container references the same data and doesn't lose the reference to the previously created anonymous volume?
Secondly: how does this work with a yml file? Is it possible to reference multiple replicas of such a database container to one volume (High Availability Mode)?
It would really be great if someone could give me a hint or best practices.
Thank you in advance.
Looking at the Dockerfile for Postgres, you see that it declares a volume instruction:
VOLUME /var/lib/postgresql/data
Every time you run a new Postgres container without specifying a --volume option, docker automatically creates a new volume. The volume is given a random name.
You can see all volumes by running the command:
docker volume ls
You can also inspect the files stored on the host by the volume, by inspecting the host path using:
docker volume inspect <volume-name>
So when you don't specify the --volume option for the run command, docker creates volumes for all volumes declared in the Dockerfile. This is mainly a safety net in case you forget to name your volume, so the data isn't lost.
Firstly: how do I make sure, if the container gets updated or recreated, that the postgres database within the new container references the same data and doesn't lose the reference to the previously created anonymous volume?
If you want docker to use the same volume, you need to specify the --volume option. Once specified, docker won't create a new volume; it will simply mount the existing volume at the specified path in the container.
As a best practice, name your volumes that have valuable data. For example:
docker run --volume postgresData:/var/lib/postgresql/data ...
If you run this command for the first time, the volume postgresData will be created and will hold the contents of /var/lib/postgresql/data on the host. The second time you run it, the same data stored on the host will be mounted onto the container.
Secondly: how does this work with a yml file? Is it possible to reference multiple replicas of such a database container to one volume?
Yes, volumes can be shared between multiple containers: you can mount the same volume onto multiple containers, and the containers will use the same files. Docker compose allows you to do that.
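For instance, a minimal sketch (service and volume names hypothetical; whether two database processes can safely share one data directory is a separate question for postgres itself):
version: "3"
services:
  db1:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
  db2:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata: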
However, beware that volumes are limited to the host on which they were created. When running containers on multiple machines, the volume needs to be accessible from all of them. There are ways/tools to achieve that, but they are a bit complex, and this is still a limitation to be addressed in Docker.
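One hedged sketch of such a setup, assuming an NFS server at 10.0.0.5 exporting /exports/pgdata (all values hypothetical), using the local driver's NFS support in a compose file:
volumes:
  pgdata:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.5,rw"
      device: ":/exports/pgdata"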