Is it possible to mount multiple volumes when starting minikube? - kubernetes

I tried this, but it didn't work:
minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user/app1:/minikube-host/app1 \
--mount-string /home/user/app2:/minikube-host/app2
Only /home/user/app2 was mounted.

You can run multiple minikube mount commands after starting minikube to mount the different folders:
minikube mount /home/user/app1:/minikube-host/app1
minikube mount /home/user/app2:/minikube-host/app2
This will mount multiple folders in minikube.
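Each mount is served by a long-running foreground process, so if you want both mounts from a single terminal you can background them; a minimal sketch, assuming the host paths from the question exist:
minikube mount /home/user/app1:/minikube-host/app1 &
minikube mount /home/user/app2:/minikube-host/app2 &
# both daemon processes must stay alive for the mounts to remain accessible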

In your case there is no need to mount multiple volumes at start.
Note that each minikube mount run after startup needs its own terminal that stays open for as long as the mount is in use.
You can mount /home/user -> /minikube-host. All the folders inside /home/user will then be inside the VM at /minikube-host.
/home/user/app1 will be available inside the VM as
/minikube-host/app1
/home/user/app2 will be available inside the VM as
/minikube-host/app2
minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user:/minikube-host
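To verify that the single parent mount exposes both folders, you can list it from inside the VM (the expected output assumes the two app folders from the question):
minikube ssh -- ls /minikube-host
# app1  app2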
Hope this helps!

Currently there is no way. Even using minikube mount you need to run each command in a separate terminal, which is completely unusable.

Related

kubernetes - k3d how to use local directory as a Persistent volume

I am using k3d to run local Kubernetes.
I have created a cluster using k3d.
Now I want to mount a local directory as a persistent volume.
How can I do this while using k3d?
I know that in minikube it is done like this:
$ minikube start --mount-string="$HOME/go/src/github.com/nginx:/data" --mount
Then, if you mount /data into your Pod using hostPath, you will get your local directory's data inside the Pod.
Is there any similar technique for k3d?
According to the answers to this GitHub issue, the feature you're looking for is not available yet.
Here is an idea from that link:
The simplest I guess would be to have a pretty generic mount containing all the code, e.g. in my case, I could do k3d cluster create -v "$HOME/git:/git#agent:*" to get all the repositories on my host present in all agent nodes to be used for hot-reloading.
According to this documentation, one can use the following command with the appropriate flag:
k3d cluster create NAME -v [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]]
This command mounts volumes into the nodes.
(Format: [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]])
Example:
`k3d cluster create --agents 2 -v /my/path#agent:0,1 -v /tmp/test:/tmp/other#server:0`
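Once the directory is mounted into the nodes, a Pod can consume it through a hostPath volume, just like the minikube example in the question. A minimal sketch, assuming the $HOME/git:/git mount quoted above (the pod and image names are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: code
      mountPath: /code
  volumes:
  - name: code
    hostPath:
      path: /git   # the DEST side of the -v flag, i.e. the path inside the node
EOF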
Here is also an interesting article on how volumes and storage work in a K3s cluster (with examples).
I think this feature is not yet available:
https://github.com/k3d-io/k3d/issues/566
So far we can only mount volumes when we create a new cluster:
k3d cluster create mykube --volume $HOME/go/src/github.com/nginx:/data
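Since k3d nodes are Docker containers, a quick way to check that the mount landed is to list the path inside a node (the container name follows k3d's k3d-&lt;cluster&gt;-&lt;role&gt;-&lt;index&gt; pattern):
docker exec k3d-mykube-server-0 ls /data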

How to Mount Multiple CephFS on Client-Node?

I created three CephFS file systems and tried to mount them on the client node, but didn't find any way to mount one specific CephFS. I tried
mount -t ceph mon-node:/ /mnt/apachefs/ -o mds_namespace=webfs,secret=ceph-authtool -p /etc/ceph/ceph.client.admin.keyring
But it fails. Is there any other way to mount multiple file systems on the client node using the kernel driver (mount.ceph) or ceph-fuse?
It is possible to specify which CephFS to mount with the following options:
-o mds_namespace ... kernel driver (mount -t ceph)
--client_mds_namespace ... FUSE client (ceph-fuse)
I am pretty sure that -o mds_namespace did not work due to an old kernel version. If you are using CentOS 7, please test it with ceph-fuse 12.2.4 or a later version and --client_mds_namespace. It worked fine in my environment.
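For example, to pick the webfs file system from the question, something like the following should work (the monitor address, mount point, and secret file paths are illustrative):
# kernel driver
mount -t ceph mon-node:6789:/ /mnt/webfs -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=webfs
# FUSE client
ceph-fuse /mnt/webfs --client_mds_namespace=webfs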
If you are using a Debian-based system, you can install the ceph-fs-common package with apt: apt-get install -y ceph-fs-common.
With newer Ceph releases you can create multiple file systems and select one at mount time with the fs= mount option:
ceph fs volume create nextcloud [<placement>]
ceph fs volume create okd-admin [<placement>]
#/etc/fstab
### one
10.10.20.6:6789:/folder1 /USERDATA ceph name=admin,secretfile=/etc/ceph/secret.key,fs=nextcloud,noatime,_netdev 0 2
### two
10.10.20.5:6789:/folder2 /mnt/cephfs ceph name=okd-admin,secretfile=/etc/ceph/secret-openshift.key,fs=okd-admin,noatime,_netdev 0 2
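After adding the entries, you can mount everything and check that both file systems came up (standard tooling, nothing Ceph-specific assumed):
sudo mount -a
mount | grep ceph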

Is it possible to choose the path of named volumes with the default volume drivers in docker?

Is there a way to choose the path of named volumes with the default volume drivers in Docker?
I know I can bind-mount volumes in each service. I know I can create named volumes outside of services and then share them by mounting them in each service. But I can't find a way to get the shared data on a path that I select, instead of Docker's /var/lib/docker/volumes/.
Has anyone tried to share a volume as a mount point in two different containers of the same docker-compose file, where the volume is at a location of your choosing?
You cannot do that with the built-in Docker volume driver. You need to install a third-party plugin such as local-persist:
https://github.com/CWSpear/local-persist
curl -fsSL https://raw.githubusercontent.com/CWSpear/local-persist/master/scripts/install.sh | sudo bash
docker volume create -d local-persist -o mountpoint=/data/images --name=images
docker run -d -v images:/path/to/images/on/one/ one
docker run -d -v images:/path/to/images/on/two/ two
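Since the question mentions docker-compose, the same volume can be declared once in the compose file and shared by both services; a sketch based on the plugin's documented driver options (the service and image names are illustrative):
# docker-compose.yml
version: "2"
services:
  one:
    image: one
    volumes:
      - images:/path/to/images/on/one
  two:
    image: two
    volumes:
      - images:/path/to/images/on/two
volumes:
  images:
    driver: local-persist
    driver_opts:
      mountpoint: /data/images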

Minikube mount fails with input/output error

Kinda what it says on the tin. I try doing minikube mount /some/dir:/home/docker/other_dir &, and it fails with the following error:
Mounting /some/dir into /home/docker/other_dir on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
ssh command error:
command :
sudo mkdir -p /home/docker/other_dir || true;
sudo mount -t 9p -o trans=tcp,port=38902,dfltuid=1001,dfltgid=1001,version=9p2000.u,msize=262144 192.168.99.1 /home/docker/other_dir;
sudo chmod 775 /home/docker/other_dir;
err : exit status 1
output : chmod: changing permissions of '/home/docker/other_dir': Input/output error
Then, when I do a minikube ssh and ls -l inside /home/docker, I get this:
$ ls -l
ls: cannot access 'other_dir': Input/output error
total 0
d????????? ? ? ? ? ? other_dir
UPDATE:
After some experimenting, it looks like the problem arises when /some/dir is owned by a user other than the current user. Why this is the case is unclear.
Which version of minikube are you running? It's working for me on minikube version v0.20.0.
minikube mount /tmp/moun/:/home/docker/pk
Mounting /tmp/moun/ into /home/docker/pk on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
It's working fine and I can create files too:
$ touch /tmp/moun/cool
We can check the file with:
$ minikube ssh
$ ls /home/docker/pk
cool
https://github.com/kubernetes/minikube/issues/1822
You'll need to run the minikube mount command as that user if you want to mount a folder owned by that user.
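Alternatively, if changing ownership of the host directory is acceptable, a possible workaround is to make the current user the owner and retry the mount (a sketch, not verified against every minikube version):
sudo chown -R "$USER" /some/dir
minikube mount /some/dir:/home/docker/other_dir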

docker-compose mounted volumes remain

I'm using docker-compose in one of my projects. During development I mount my source directory as a volume in one of my docker services for easy development. At the same time, I have a db service (psql) that mounts a named volume for persistent data storage.
I start my solution and everything works fine:
$ docker-compose up -d
When I check my volumes, I see the named volume and the "unnamed" (source) volume:
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
The problem I experience is that, when I do
$ docker-compose down
...
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
both volumes remain. Every time I run
$ docker-compose down
$ docker-compose up -d
a new volume is created for my source mount:
$ docker volume ls
DRIVER VOLUME NAME
local 19181286b19c0c3f5b67d7d1f0e3f237c83317816acbdf4223328fdf46046518
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
I know that this will not happen on my deployment server, since it will not mount the source, but is there a way to keep the mounted source volume from persisting?
You can use the --rm option with docker run. To get the same effect with docker-compose, you can run
docker-compose rm -v after stopping your containers with docker-compose stop.
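Putting it together, the cleanup sequence would look like this:
docker-compose stop
docker-compose rm -v   # -v also removes the anonymous volumes attached to the containers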
If you go through the docs about data volumes, it's mentioned that
Data volumes persist even if the container itself is deleted.
So that means, stopping a container will not remove the volumes it created, whether named or anonymous.
Now if you read further down to Removing volumes
A Docker data volume persists after a container is deleted. You can
create named or anonymous volumes. Named volumes have a specific
source from outside the container, for example awesome:/bar. Anonymous
volumes have no specific source. When the container is deleted, you
should instruct the Docker Engine daemon to clean up anonymous
volumes. To do this, use the --rm option, for example:
$ docker run --rm -v /foo -v awesome:/bar busybox top
This command creates an anonymous /foo volume. When the container is
removed, the Docker Engine removes the /foo volume but not the awesome
volume.
Just remove volumes with the down command:
docker-compose down -v
Note that -v removes not only the anonymous volumes but also the named volumes declared in the compose file (myproject_data here), so use it only if you don't need that data.