I am using the "wurstmeister/kafka" Docker image with the latest tag.
Whenever I stop and start the Kafka container, it comes back up with the default configuration.
How can I mount a volume so that data persists even when the container stops or automatically restarts?
All the data is saved in log files inside the folder provided via the volume, but when the container restarts it doesn't load the data from that folder and starts with a fresh copy.
I have tried the following:
volumes:
  - /kafka:/kafka-volume
When the container restarts, all topics should persist exactly as they were, with the same partitions created earlier.
Any help would be appreciated.
Add this to your compose file (note that target is the path inside the container where Kafka keeps its data, not a folder on the host):
services:
  kafka:
    volumes:
      - type: volume
        source: kafkalogs
        target: /path/inside/the/container
volumes:
  kafkalogs:
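For the wurstmeister/kafka image specifically, something like the sketch below may work; the /kafka path and the KAFKA_LOG_DIRS value are assumptions about that image's defaults, so check where your broker actually writes its log segments:

services:
  kafka:
    image: wurstmeister/kafka
    environment:
      # Assumed: pin the log directory so its name doesn't change with each new container hostname
      KAFKA_LOG_DIRS: /kafka/kafka-logs
    volumes:
      - kafkalogs:/kafka
volumes:
  kafkalogs:

Also note that topic and partition metadata lives in Zookeeper, so the Zookeeper container's data directory generally has to be persisted as well for topics to survive a restart.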
Here is my docker-compose.yml:
version: '3.7'
services:
  mongo:
    container_name: mycars-mongo-container
    image: mongo:latest
    restart: always
    ports:
      - 27017:27017
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: r00tp455w0rd
    volumes:
      - ./docker-entrypoint-initdb.d/mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
And here is mongo-init.js:
print('Database setup start...');
db = db.getSiblingDB('mycars');
db.createUser({
  user: 'db_user_mycars',
  pwd: 'Mycars.123M',
  roles: [{ role: 'readWrite', db: 'Mycars' }],
});
db.createCollection('users');
db.createCollection('vehicles');
print('Database setup end.');
When I run docker-compose up, the container starts, I can connect to the mycars db using db_user_mycars, and life is good!
When I commit the container and create an image using docker commit [container name] [image name], the image is created fine.
Now when I kill my running container and create a new one from the image I get this error:
docker: Error response from daemon: cannot mount volume over existing file, file exists /var/lib/docker/overlay2/4f2db1931d06e1e46ba27842c780059f7dc936252cbd23cc02260c9b2d295ce4/merged/docker-entrypoint-initdb.d/mongo-init.js
Any idea why that might be? What I want to achieve is to put this image on AWS ECR so that whenever a container is created from it, db_user_mycars is created in that container.
If you don't run docker commit you won't have this problem.
The container you're running has three things in it: the code and base OS from the mongo image; the database data in a hidden anonymous volume; and the config file you're bind-mounting. If you docker-compose down this container and docker-compose up to bring it up again, you will get a basically identical container.
This is normal Docker behavior: you should always be able to delete and recreate a container without losing state, possibly by mounting some sort of storage for persistent state. You should never run docker commit. It gets you a "golden" image that you can't easily recreate or update. And to reiterate, in this state, where you're running a Docker Hub image with mounted configuration, there's no particular need to commit anything.
What's probably actually happening here is that Docker needs to make some change in the container filesystem to record that a file will be mounted at mongo-init.js (normally Linux filesystem mounts target directories). When you commit the container and try to relaunch it, the image carries some record that something should be mounted there, and the actual volume mount conflicts with it.
You only have this problem because you're running docker commit. If you don't docker commit, you can run your setup reproducibly with your config file mounted into the unmodified mongo image.
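If the goal is still to push a single image to AWS ECR, one alternative to docker commit (my sketch, not from the original answer) is a small Dockerfile that bakes the init script into the official image; the official mongo image runs scripts it finds in /docker-entrypoint-initdb.d, but only while the data directory is still empty:

# Hypothetical Dockerfile replacing `docker commit`
FROM mongo:latest
COPY mongo-init.js /docker-entrypoint-initdb.d/mongo-init.js

Build, tag, and push that with the usual docker build / docker tag / docker push flow; every fresh container started from it with an empty data volume will create db_user_mycars during first initialization.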
I have a Docker container that I build from a Dockerfile via docker-compose. I have a named volume; on the first build, a file is copied into /state/config.
All is well: while the container is running, /state/config receives more data because of a process I have running.
The volume is set up like so:
volumes:
  - config_data:/state/config
In the Dockerfile I use COPY like so:
COPY --from=builder /src/runner /state/config/runner
So, as I say, on the first run, when no container or volume exists yet, /state/config receives the "runner" file, and more data is added to that same directory while the container is running.
Now, I don't wish to destroy the volume, but if I rebuild the image using docker build or docker-compose build --no-cache, the volume stays (which is what I want), yet the runner file is NOT updated.
I even tried exec'ing into the container, removing runner, and rebuilding again; now the copy of the file doesn't happen at all.
I wonder why this is happening.
Of course, I think I have a workaround: place the file inside the container using a temporary (anonymous) volume rather than a named one, so that the next time the container is recreated the file is copied again.
But I am confused about why it's happening. Can anybody help?
I have two containers: A and B. Container B needs to be restarted each time container A is recreated to pick up that container's new id.
How can this be accomplished without hackery?
Not something I've tried to do before, but the Docker daemon emits events when certain things happen. You can see some of these at https://docs.docker.com/engine/reference/commandline/events/#parent-command; for example:
Docker containers report the following events:
attach
commit
copy
create
destroy
detach
die
exec_create
exec_detach
exec_start
export
health_status
kill
oom
pause
rename
resize
restart
start
stop
top
unpause
update
By default, on a single docker host, you can talk to the daemon through a unix socket /var/run/docker.sock. You can also bind that unix socket into a container so that you can catch events from inside a container. Here's a simple docker-compose.yml that does this:
version: '3.2'
services:
  container_a:
    image: nginx
    container_name: container_a
  container_b:
    image: docker
    container_name: container_b
    command: docker events
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
Start this stack with docker-compose up -d. Then, in one terminal, run docker logs -f container_b. In another terminal, run docker restart container_a and you'll see some events in the log window that show the container restarting. Your application can catch those events using a docker client library and then either terminate itself and wait to be restarted, or somehow otherwise arrange restart or reconfiguration.
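As a rough illustration of that last point, container_b could run something like the following instead of the bare docker events command; the container names match the compose file above, but the script itself is my sketch, not part of the original answer:

# Hypothetical: watch for container_a starting and restart container_b in response
docker events \
  --filter 'container=container_a' \
  --filter 'event=start' \
  --format '{{.ID}}' |
while read -r id; do
  docker restart container_b
done

After container_b comes back up, the same command starts again and resumes watching.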
Note that these events will actually tell you the new container's ID, so maybe you don't even need to restart?
I deploy a service on a standard Docker for AWS stack (using this template).
I deploy using docker stack deploy -c docker-compose.yml pos with this compose file:
version: "3.2"
services:
postgres_vanilla:
image: postgres
volumes:
- db-data:/var/lib/postgresql
volumes:
db-data:
driver: "cloudstor:aws"
driver_opts:
size: "6"
ebstype: "gp2"
backing: "relocatable"
I then change some data in the db and force an update of the service with docker service update --force pos_postgres_vanilla
The problem is that the data I change doesn't persist after the update.
I've noticed that the postgres initdb script runs every time I update, so I assume that's related.
Is there something I'm doing wrong?
The issue was that cloudstor:aws creates the volume with a lost+found directory under it, so when postgres starts it finds that the data directory isn't empty and complains about it. To fix that I first changed the volume to be mounted one directory above the data directory, at /var/lib/postgresql, but then postgres could not find the PG_VERSION file, which in turn caused it to run initdb every time the container starts (https://github.com/docker-library/postgres/blob/master/11/docker-entrypoint.sh#L57).
So to work around it, instead of mounting the volume one directory above the data directory, I moved the data directory one level below the volume mount by overriding the environment variable PGDATA (to something like /var/lib/postgresql/data/db/).
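In compose terms, a sketch of that workaround might look like the following; the exact mount point and the PGDATA value are assumptions inferred from the description above, not copied from the real stack file:

services:
  postgres_vanilla:
    image: postgres
    environment:
      # Assumed value: keep the real data directory one level below the volume mount
      PGDATA: /var/lib/postgresql/data/db
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
    # same cloudstor:aws driver settings as in the original compose file
    driver: "cloudstor:aws"

That way cloudstor's lost+found ends up in the mounted directory itself, while postgres initializes into a subdirectory that starts out empty, so initdb only runs once.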
I had assumed that docker-compose volumes are mounted before the container's service is started. Is this the case?
I ask, since I've got a docker-compose.yml that, amongst other things, fires up a parse-server container, and mounts a host directory in the container, which contains some code the service should run (Cloud Code).
I can see the directory is mounted correctly after docker-compose up by shelling into the container; the expected files are in the expected place. But the parse-server instance doesn't trigger the code in the mounted directory (I checked it by adding some garbage; no errors).
Is it possible the volume is being mounted after the parse-server service starts?
This is my docker-compose.yml:
version: "3"
volumes:
myappdbdata:
myappconfigdata:
services:
# MongoDB
myappdb:
image: mongo:3.0.8
volumes:
- myappdbdata:/data/db
- myappconfigdata:/data/configdb
# Parse Server
myapp-parse-server:
image: parseplatform/parse-server:2.7.2
environment:
- PARSE_SERVER_MASTER_KEY=someString
- PARSE_SERVER_APPLICATION_ID=myapp
- VERBOSE=1
- PARSE_SERVER_DATABASE_URI=mongodb://myappdb:27017/dev
- PARSE_SERVER_URL=http://myapp-parse-server:1337/parse
- PARSE_SERVER_CLOUD_CODE_MAIN = /parse-server/cloud/
depends_on:
- myappdb
ports:
- 5000:1337
volumes:
- ./cloud:/parse-server/cloud
I'm not sure of the answer, as I can't find this information in the docs, but I've had problems with volumes when I needed them mounted before the container was really running; sometimes configuration files were not loaded, for example.
The only way I found to deal with it is to create a Dockerfile, copy what you want into the image, and use that image for your container.
Hope this helps.
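As a rough sketch of that suggestion, assuming the layout from the compose file above (this is my illustration, not part of the answer):

# Hypothetical Dockerfile: bake the Cloud Code into the image instead of bind-mounting it
FROM parseplatform/parse-server:2.7.2
COPY cloud /parse-server/cloud

The myapp-parse-server service would then use an image built from this Dockerfile instead of the stock image plus the ./cloud bind mount.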
Sadly enough, the biggest issue here was whitespace:
PARSE_SERVER_CLOUD_CODE_MAIN = /parse-server/cloud/
should have been
PARSE_SERVER_CLOUD=/parse-server/cloud/
Spent 1.5 days chasing this; fml.
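For what it's worth, the likely reason the whitespace matters (my note, not part of the original answer): in the list form of environment, everything up to the first = becomes the variable name, so the stray space ends up inside the name and parse-server never finds the variable it is looking for. Compare:

environment:
  # Broken: the variable name literally includes the trailing space
  - PARSE_SERVER_CLOUD_CODE_MAIN = /parse-server/cloud/
  # Working: no whitespace around '='
  - PARSE_SERVER_CLOUD=/parse-server/cloud/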