Every time I create a new OpenEBS volume and mount it on the host/application, a lost+found directory is created.
Is there some way to avoid this, and what is it needed for?
The lost+found directory is created by ext4. It can be deleted manually, but it will be recreated on the next mount/fsck. In your application YAML, use the following arguments (this is the mysqld --ignore-db-dir option, so it applies to MySQL containers) to ignore it:
image: <image_name>
args:
  - "--ignore-db-dir"
  - "lost+found"
Related
I have a Docker container that I build with a Dockerfile via docker-compose. I have a named volume; on the first build, a file is copied into /state/config.
All is well: while the container is running, /state/config receives more data from a process I have running.
The volume is set up like so:
volumes:
  - config_data:/state/config
In the Dockerfile I use COPY like so:
COPY --from=builder /src/runner /state/config/runner
So, as I say, on the first run (when no container or volume exists yet) /state/config receives the "runner" file, and more data is added to that same directory while the container is running.
Now I don't wish to destroy the volume, but if I rebuild the container using docker build or docker-compose build --no-cache, the volume stays (which is what I want) but runner is NOT updated.
I even tried to exec into the container, remove runner, and rebuild the container again; now the copying of the file does not happen at all.
I wondered why this is happening.
I think I may have a workaround: place the file inside the container using an anonymous volume instead of a named one, so that the next time the container is re-created the file is copied again.
But I am confused about why it's happening. Can anybody help?
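For what it's worth, Docker only copies image content into a named volume when the volume is first created; afterwards the volume's existing content shadows whatever COPY put into the image. One sketch of a way around that, keeping the named volume but refreshing the file at startup (the /opt/runner staging path and my-app command are hypothetical):

```yaml
# Keep a pristine copy of runner in the image outside the volume mount
# (e.g. COPY --from=builder /src/runner /opt/runner in the Dockerfile),
# then copy it into the volume each time the container starts.
services:
  example:                     # hypothetical service name
    build: .
    volumes:
      - config_data:/state/config
    entrypoint: ["sh", "-c", "cp /opt/runner /state/config/runner && exec my-app"]
volumes:
  config_data:
```

This way rebuilds propagate the new runner without destroying the volume's other data.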
I am using "wurstmeister/kafka" docker image with latest tag.
Whenever I tried to stop & start the kafka container, it will start container with default configuration.
How can I mount volume, so that data persists even when container stops or automatically restarts.
All data saved in logs file inside provided folder in volume, but when container restarts it doesn't load data from that folder & starts fresh copy.
I have tried following :
volumes:
  - /kafka:/kafka-volume
When the container restarts, all topics should persist as they are, with the same partitions created earlier.
Any help would be appreciated.
Add this to your compose file:
services:
  kafka:
    volumes:
      - type: volume
        source: kafkalogs
        target: /path/to/folder/in/container
volumes:
  kafkalogs:
Note that target is the path inside the container, not on the host.
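As a sketch only: with the wurstmeister image, the container-side target has to match where Kafka actually writes its log segments, which you can pin with the KAFKA_LOG_DIRS environment variable (the /kafka path below is an assumption, not taken from the question):

```yaml
# Sketch: persist Kafka topic data across container restarts.
services:
  kafka:
    image: wurstmeister/kafka
    environment:
      KAFKA_LOG_DIRS: /kafka/kafka-logs   # assumed path; maps to log.dirs
    volumes:
      - type: volume
        source: kafkalogs
        target: /kafka                    # container path containing the log dirs
volumes:
  kafkalogs:
```

If the volume target and log.dirs don't line up, Kafka writes into the container's writable layer and the data is lost on recreate, which matches the symptom described above.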
This might be simple, but I can't seem to figure out why a bash script mounted via a ConfigMap cannot be run as root:
root@myPodId:/opt/nodejs-app# ls -alh /path/fileName
lrwxrwxrwx 1 root root 18 Sep 10 09:33 /path/fileName -> ..data/fileName
root@myPodId:/opt/nodejs-app# whoami
root
root@myPodId:/opt/nodejs-app# /bin/bash -c /path/fileName
/bin/bash: /path/fileName: Permission denied
I'm guessing, but I'd think that, as with Docker, root in the container isn't the actual root and works more like a pseudo-root account.
If that's the case, and the file cannot be run this way, how would you include the script without having to re-create the Docker container every time the script changes?
See here: https://github.com/kubernetes/kubernetes/issues/71356#issuecomment-441169334
You need to set the defaultMode on the ConfigMap to the permissions you are asking for:
volumes:
  - name: test-script
    configMap:
      name: test-script
      defaultMode: 0777
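For completeness, here is a minimal sketch of how that volume might be mounted and executed in the Pod spec; the container name and image are hypothetical:

```yaml
# Sketch: mount the ConfigMap with exec permissions and run the script.
containers:
  - name: app                      # hypothetical container name
    image: node:latest             # hypothetical image
    command: ["/path/fileName"]    # executable thanks to defaultMode: 0777
    volumeMounts:
      - name: test-script
        mountPath: /path
volumes:
  - name: test-script
    configMap:
      name: test-script
      defaultMode: 0777
```

The defaultMode applies to every key in the ConfigMap; use the per-item mode field if only one file should be executable.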
Alright, I don't have links to the documentation, but ConfigMaps are definitely mounted on a read-only filesystem. What I came up with is to cat the contents of the file into another file in a location where the container's root can write (/usr/local in my case); that way the file can be run.
If anyone comes up with a more clever solution, I'll mark it as the correct answer.
It's no surprise that you cannot run a script which is mounted as a ConfigMap; the name of the resource itself (ConfigMap) suggests it isn't meant for that.
As a workaround, you can put your script in some git repo, mount an emptyDir into an initContainer that clones the repo with git, and then mount the same emptyDir into the Pod's main container. The initContainer will download the latest version every time the Pod's containers are created.
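A sketch of that workaround; the repo URL, image names, and script path are all hypothetical placeholders:

```yaml
# Sketch: clone scripts into an emptyDir in an initContainer,
# then share that directory with the main container.
spec:
  volumes:
    - name: scripts
      emptyDir: {}
  initContainers:
    - name: fetch-scripts
      image: alpine/git                       # hypothetical git-capable image
      args: ["clone", "https://example.com/repo.git", "/scripts"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
  containers:
    - name: app
      image: my-app                           # hypothetical application image
      command: ["/bin/bash", "/scripts/fileName"]
      volumeMounts:
        - name: scripts
          mountPath: /scripts
```

Unlike a ConfigMap mount, the emptyDir is writable, so the script keeps its execute bit and can even be modified in place.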
I deploy a service on a standard Docker for AWS stack (using this template).
I deploy using docker stack deploy -c docker-compose.yml pos with this compose file:
version: "3.2"
services:
postgres_vanilla:
image: postgres
volumes:
- db-data:/var/lib/postgresql
volumes:
db-data:
driver: "cloudstor:aws"
driver_opts:
size: "6"
ebstype: "gp2"
backing: "relocatable"
I then change some data in the db and force an update of the service with docker service update --force pos_postgres_vanilla.
The problem is that the data I change doesn't persist after the update.
I've noticed that the postgres initdb script runs on every update, so I assume it's related.
Is there something I'm doing wrong?
The issue was that cloudstor:aws creates the volume with a lost+found directory under it, so when postgres starts it finds that the data directory isn't empty and complains about it. To fix that, I changed the volume to be mounted one directory above the data directory, at /var/lib/postgresql, but that caused postgres not to find the PG_VERSION file, which in turn caused it to run initdb every time the container started (https://github.com/docker-library/postgres/blob/master/11/docker-entrypoint.sh#L57).
So to work around it, instead of mounting the volume one directory above the data directory, I changed the data directory to be one level below the volume mount by overriding the PGDATA environment variable (setting it to something like /var/lib/postgresql/data/db/).
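Put into the compose file from the question, that workaround could look like this sketch (the exact PGDATA value is a free choice, as long as it sits a level below the mount point):

```yaml
# Sketch: keep the volume mounted at /var/lib/postgresql, but move the
# data directory one level down so lost+found doesn't trip initdb.
version: "3.2"
services:
  postgres_vanilla:
    image: postgres
    environment:
      PGDATA: /var/lib/postgresql/data/db   # assumed path below the mount
    volumes:
      - db-data:/var/lib/postgresql
volumes:
  db-data:
    driver: "cloudstor:aws"
    driver_opts:
      size: "6"
      ebstype: "gp2"
      backing: "relocatable"
```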
In order to keep track of the volumes used by docker-compose, I'd like to use named volumes. This works great for "normal" volumes like:
version: '2'
services:
  example-app:
    volumes:
      - named_vol:/dir/in/container/volume
volumes:
  named_vol:
But I can't figure out how to make it work when mounting a path from the local host. I'm looking for something like:
version: '2'
services:
  example-app:
    volumes:
      - named_homedir:/dir/in/container/volume
volumes:
  named_homedir: /c/Users/
or
version: '2'
services:
  example-app:
    volumes:
      - /c/Users/:/home/dir/in/container/ --name named_homedir
Is this in any way possible, or am I stuck with anonymous volumes for mounted ones?
As you can read in this GitHub issue, mounting named volumes has been possible since Docker 1.11/1.12. Driver-specific options are documented. Some notes from the GitHub thread:
docker volume create --opt type=none --opt device=<host path> --opt o=bind
If the host path does not exist, it will not be created.
Options are passed in literally to the mount syscall. We may add special cases for certain "types" because they are awkward to use... like the nfs example [referenced above].
– @cpuguy83
To address your specific question about how to use that in compose, you write under your volumes section:
my-named-volume:
  driver_opts:
    type: none
    device: /home/full/path  # NOTE: needs the full path (~ doesn't work)
    o: bind
This is because, as cpuguy83 wrote in the GitHub thread linked above, the options are (under the hood) passed directly to the mount syscall.
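Put together in a full compose file, that could look like the following sketch (the service name, image, and paths are placeholders):

```yaml
# Sketch: a named volume that is actually a bind mount of a host directory.
version: "2"
services:
  example-app:
    image: alpine              # placeholder image
    volumes:
      - my-named-volume:/dir/in/container
volumes:
  my-named-volume:
    driver: local
    driver_opts:
      type: none
      device: /home/full/path  # must already exist; full path required
      o: bind
```

The host directory is not created for you; if it is missing, the container fails to start.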
EDIT: As commented by…
…@villasv, you can use ${PWD} for relative paths.
…@mikeyjk, you might need to delete preexisting volumes:
docker volume rm $(docker volume ls -q)
OR
docker volume prune
…@Camron Hudson, in case you have "no such file or directory" errors showing up, you might want to read this SO question/answer, as Docker does not follow symlinks and there might be permission issues with your local file system.
The OP appears to be using full paths already, but if, like most people, you're interested in mounting a project folder inside the container, this might help.
This is how to do it with driver_opts, as @kaiser said and @linuxbandit exemplified. You can use the usually available environment variable ${PWD} to avoid specifying full paths for directories in the docker-compose context:
logs-directory:
  driver_opts:
    type: none
    device: ${PWD}/logs
    o: bind
I've been trying (almost) the same thing, and it seems to work with something like:
version: '2'
services:
  example-app:
    volumes:
      - named_vol:/dir/in/container/volume
      - /c/Users/:/dir/in/container/volume
volumes:
  named_vol:
Seems to work for me (I didn't dig into it, just tested it).
I was looking for an answer to the same question recently and stumbled on this plugin: https://github.com/CWSpear/local-persist
It looks like it allows just what the topic starter wants to do.
I haven't tried it myself yet, but I thought it might be useful for somebody.
Host volumes are different from named volumes or anonymous volumes. Their "name" is the path on the host.
There is no way to use the volumes section for host volumes.
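For contrast, a plain host (bind) mount is declared inline on the service and never appears in the top-level volumes section at all:

```yaml
# Sketch: a host bind mount; paths are placeholders.
services:
  example-app:
    volumes:
      - /c/Users/:/dir/in/container   # host path : container path
```

That is why host volumes don't show up in docker volume ls the way named volumes do; their identity is just the host path.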