Mounting NFS volume in Google Container Engine with Container OS (COS) - kubernetes

After migrating the image type from container-vm to cos for the nodes of a GKE cluster, it no longer seems possible to mount an NFS volume for a pod.
The problem seems to be missing NFS client libraries, as a mount command run from the command line fails on all COS versions I tried (cos-stable-58-9334-62-0, cos-beta-59-9460-20-0, cos-dev-60-9540-0-0).
sudo mount -t nfs mynfsserver:/myshare /mnt
fails with
mount: wrong fs type, bad option, bad superblock on mynfsserver:/myshare,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
But this contradicts the supported volume types listed here:
https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support
Mounting an NFS volume in a pod works in a pool with image-type container-vm but not with cos.
With cos I get the following messages from kubectl describe pod:
MountVolume.SetUp failed for volume "kubernetes.io/nfs/b6e6cf44-41e7-11e7-8b00-42010a840079-nfs-mandant1" (spec.Name: "nfs-mandant1") pod "b6e6cf44-41e7-11e7-8b00-42010a840079" (UID: "b6e6cf44-41e7-11e7-8b00-42010a840079") with: mount failed: exit status 1
Mounting command: /home/kubernetes/containerized_mounter/mounter
Mounting arguments: singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1 nfs []
Output: Mount failed: Mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1]
Output: mount.nfs: Failed to resolve server singlefs-1-vm: Temporary failure in name resolution

Martin, are you setting up the mounts manually (executing mount yourself), or are you letting Kubernetes do it on your behalf via a pod referencing an NFS volume?
The former will not work; the latter will. As you've discovered, COS does not ship with NFS client libraries, so GKE gets around this by setting up a chroot (at /home/kubernetes/containerized_mounter/rootfs) with the required binaries and calling mount inside that.
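For reference, letting Kubernetes do it just means declaring the share in the pod spec; a minimal sketch, reusing the server and path from the error output above (adjust names to your environment):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs-mandant1
      mountPath: /data
  volumes:
  - name: nfs-mandant1
    nfs:
      # must be resolvable from the node: the failure in the log above is name
      # resolution inside the mount chroot, not a missing NFS client
      server: singlefs-1-vm
      path: /data/mandant1
With a spec like this the kubelet calls /home/kubernetes/containerized_mounter/mounter for you, which is exactly the chroot mechanism described above.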

I've nicked the solution @saad-ali mentioned above, from the Kubernetes project, to make this work.
To be concrete, I've added the following to my cloud-config:
write_files:
# This script creates a chroot environment containing the tools needed to mount an nfs drive
- path: /tmp/mount_config.sh
  permissions: 0755
  owner: root
  content: |
    #!/bin/sh
    set -x # For debugging
    export USER=root
    export HOME=/home/dockerrunner
    mkdir -p /tmp/mount_chroot
    chmod a+x /tmp/mount_chroot
    cd /tmp/
    echo "Sleeping for 30 seconds because toolbox pull fails otherwise"
    sleep 30
    toolbox --bind /tmp /google-cloud-sdk/bin/gsutil cp gs://<uploaded-file-bucket>/mounter.tar /tmp/mounter.tar
    tar xf /tmp/mounter.tar -C /tmp/mount_chroot/
    mount --bind /tmp/mount_chroot /tmp/mount_chroot
    mount -o remount,exec /tmp/mount_chroot
    mount --rbind /proc /tmp/mount_chroot/proc
    mount --rbind /dev /tmp/mount_chroot/dev
    mount --rbind /tmp /tmp/mount_chroot/tmp
    mount --rbind /mnt /tmp/mount_chroot/mnt
The uploaded-file-bucket contains the chroot image the Kubernetes team has created, downloaded from: https://storage.googleapis.com/kubernetes-release/gci-mounter/mounter.tar
Then, the runcmd for the cloud-config looks something like:
runcmd:
- /tmp/mount_config.sh
- mkdir -p /mnt/disks/nfs_mount
- chroot /tmp/mount_chroot /bin/mount -t nfs -o rw nfsserver:/sftp /mnt/disks/nfs_mount
This works. Ugly as hell, but it'll have to do for now.
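A quick sanity check on the node, using the same paths as the script above:
# list NFS mounts as seen from inside the chroot
chroot /tmp/mount_chroot /bin/mount -t nfs
# /mnt is rbind-mounted into the chroot, so (with the default shared mount
# propagation under systemd) the share should also be reachable from the host
ls /mnt/disks/nfs_mount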

Related

NFS mount display confused occasionally

We use the command mount -t nfs -o nfsvers=3,rw,bg,soft,nointr,rsize=262144,wsize=262144,tcp,actimeo=0,timeo=600,retrans=3,nolock,sync /ip1/pathA /mnt and the mount succeeds. But when we check with mount, it displays that /ip1/pathB is mounted on /mnt.
However, pathB is a file system that was created a long time ago and no longer exists.
We guess there is some cache on the NFS server?
The NFS client is CentOS Linux release 7.6.1810 (Core).
We want to know the root cause.
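One way to start narrowing this down (standard util-linux and nfs-utils tools on CentOS 7) is to compare what the kernel has mounted on /mnt with what the NFS client layer reports:
# what the kernel believes is mounted on /mnt
findmnt /mnt
grep ' /mnt ' /proc/self/mounts
# server:path and options for every NFS mount, as seen by the NFS client
nfsstat -m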

Two Volumes are created when I run a docker container with volume mapping

I am creating a PostgreSQL container using the following command:
sudo docker run -d --name=pg -p 5432:5432 -e POSTGRES_PASSWORD=secret -e PGDATA=/pgdata -v pg:/pgdata postgres
After running this container, when I check volumes by running the following command:
sudo docker volume ls
DRIVER VOLUME NAME
local 6d283475c6fe923155018c847f2c607c464244cb6767dd37a579824cf8c7e612
local pg
I get two volumes. The pg volume is created by the command, but what is the second volume?
If you look at the Dockerfile for the postgres image on Docker Hub, you will notice it has a declaration
VOLUME ["/var/lib/postgresql/data"]
If you don't explicitly mount something else on that directory, Docker will create an anonymous volume and mount it there for you. This behaves identically to a named volume except that it doesn't have a specific name.
docker inspect mostly dumps out low-level diagnostic information, but it should include the mount information, and you should see two volume mounts, one with the anonymous volume on the default PostgreSQL data directory and a second matching the explicit mount on /pgdata.
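For example, assuming the container is still named pg as in the question, the mount list can be pulled out directly:
docker inspect -f '{{ json .Mounts }}' pg
One entry should show "Name": "pg" with "Destination": "/pgdata", and the other a long hash name with "Destination": "/var/lib/postgresql/data".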

Is it possible to mount multiple volumes when starting minikube?

I tried this but it didn't work:
minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user/app1:/minikube-host/app1 \
--mount-string /home/user/app2:/minikube-host/app2
but only /home/user/app2 was mounted.
You can run multiple mount commands after starting your minikube to mount the different folders:
minikube mount /home/user/app1:/minikube-host/app1
minikube mount /home/user/app2:/minikube-host/app2
This will mount multiple folders in minikube.
There is no need to mount multiple volumes at start time in your case.
Also, a minikube mount issued after start needs a terminal that stays open (the process must keep running).
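If keeping a terminal open per mount is inconvenient, the mount processes can also be pushed into the background; a sketch (log file paths are arbitrary), keeping in mind the processes still have to stay alive:
nohup minikube mount /home/user/app1:/minikube-host/app1 > /tmp/mount-app1.log 2>&1 &
nohup minikube mount /home/user/app2:/minikube-host/app2 > /tmp/mount-app2.log 2>&1 &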
You can instead mount /home/user -> /minikube-host. All the folders inside /home/user will then be available inside the VM under /minikube-host:
/home/user/app1 will be available inside the VM as /minikube-host/app1
/home/user/app2 will be available inside the VM as /minikube-host/app2
minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user:/minikube-host
Hope this helps!
Currently there is no way. Even using minikube mount you need to run each command in a separate terminal, which makes it practically unusable.

Minikube mount fails with input/output error

Kinda what it says on the tin. I try doing minikube mount /some/dir:/home/docker/other_dir &, and it fails with the following error:
Mounting /some/dir into /home/docker/other_dir on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
ssh command error:
command :
sudo mkdir -p /home/docker/other_dir || true;
sudo mount -t 9p -o trans=tcp,port=38902,dfltuid=1001,dfltgid=1001,version=9p2000.u,msize=262144 192.168.99.1 /home/docker/other_dir;
sudo chmod 775 /home/docker/other_dir;
err : exit status 1
output : chmod: changing permissions of '/home/docker/other_dir': Input/output error
Then, when I do a minikube ssh and ls -l inside /home/docker, I get this:
$ ls -l
ls: cannot access 'other_dir': Input/output error
total 0
d????????? ? ? ? ? ? other_dir
UPDATE:
After some experimenting, it looks like the problem arises when /some/dir is owned by a user other than the current user. Why this is the case is unclear.
Which version of minikube are you running? It's working for me on minikube version v0.20.0.
minikube mount /tmp/moun/:/home/docker/pk
Mounting /tmp/moun/ into /home/docker/pk on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
It's working fine and I can create a file too:
$ touch /tmp/moun/cool
We can check the file with:
$ minikube ssh
$ ls /home/docker/pk
cool
https://github.com/kubernetes/minikube/issues/1822
You'll need to run the minikube mount command as that user if you want to mount a folder owned by that user.

docker-compose mounted volume remain

I'm using docker-compose in one of my projects. During development I mount my source directory as a volume in one of my Docker services for easy development. At the same time, I have a db service (psql) that mounts a named volume for persistent data storage.
I start my solution and everything works fine:
$ docker-compose up -d
When I check my volumes, I see both the named volume and the "unnamed" (source) volume.
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
The problem I experience is that, when I do
$ docker-compose down
...
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
both volumes remain. Every time I run
$ docker-compose down
$ docker-compose up -d
a new volume is created for my source mount
$ docker volume ls
DRIVER VOLUME NAME
local 19181286b19c0c3f5b67d7d1f0e3f237c83317816acbdf4223328fdf46046518
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
I know that this will not happen on my deployment server, since it will not mount the source, but is there a way to not make the mounted source persistent?
You can use the --rm option in docker run. To use it with docker-compose you can use
docker-compose rm -v after stopping your containers with docker-compose stop.
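Concretely, that sequence is (the -v here removes the anonymous volumes attached to the stopped containers and leaves named volumes alone):
docker-compose stop
docker-compose rm -v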
If you go through the docs about data volumes, it is mentioned that
Data volumes persist even if the container itself is deleted.
So that means, stopping a container will not remove the volumes it created, whether named or anonymous.
Now if you read further down to Removing volumes
A Docker data volume persists after a container is deleted. You can
create named or anonymous volumes. Named volumes have a specific
source from outside the container, for example awesome:/bar. Anonymous
volumes have no specific source. When the container is deleted, you
should instruct the Docker Engine daemon to clean up anonymous
volumes. To do this, use the --rm option, for example:
$ docker run --rm -v /foo -v awesome:/bar busybox top
This command creates an anonymous /foo volume. When the container is
removed, the Docker Engine removes the /foo volume but not the awesome
volume.
Just remove the volumes with the down command:
docker-compose down -v
Note that -v also removes the named volumes declared in the compose file, so only use it if losing myproject_data locally is acceptable.