Minikube mount fails with input/output error

Kinda what it says on the tin. I try doing minikube mount /some/dir:/home/docker/other_dir &, and it fails with the following error:
Mounting /some/dir into /home/docker/other_dir on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
ssh command error:
command :
sudo mkdir -p /home/docker/other_dir || true;
sudo mount -t 9p -o trans=tcp,port=38902,dfltuid=1001,dfltgid=1001,version=9p2000.u,msize=262144 192.168.99.1 /home/docker/other_dir;
sudo chmod 775 /home/docker/other_dir;
err : exit status 1
output : chmod: changing permissions of '/home/docker/other_dir': Input/output error
Then, when I do a minikube ssh and ls -l inside /home/docker, I get this:
$ ls -l
ls: cannot access 'other_dir': Input/output error
total 0
d????????? ? ? ? ? ? other_dir
UPDATE:
After some experimenting, it looks like the problem arises when /some/dir is owned by a user other than the current user. Why this is the case is unclear.

Which version of minikube are you running? It's working for me on minikube version v0.20.0.
minikube mount /tmp/moun/:/home/docker/pk
Mounting /tmp/moun/ into /home/docker/pk on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
It's working fine and I can create a file too:
$ touch /tmp/moun/cool
We can check the file with:
$ minikube ssh
$ ls /home/docker/pk
cool

https://github.com/kubernetes/minikube/issues/1822
You'll need to run the minikube mount command as that user if you want to mount a folder owned by that user.
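As a minimal workaround sketch, assuming the directory from the question and that changing its ownership is acceptable (otherwise run the whole mount command as the owning user, as described above):
# Make the current user the owner of the host directory, then retry the mount
sudo chown -R "$USER" /some/dir
minikube mount /some/dir:/home/docker/other_dir &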

Related

Minikube: bash: /usr/local/bin/minikube: No such file or directory

I just installed Minikube for my Kubernetes local setup on Ubuntu 18.04 using the following command:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
However, when I run the command:
minikube start
I get the following error:
bash: /usr/local/bin/minikube: No such file or directory
I'm really wondering what the issue might be.
I just figured it out after some research and trial and error.
Here's how I fixed it:
I simply closed that terminal and opened a new one, and ran the command again:
minikube start
OR
minikube start --driver=virtualbox
And it worked fine.
Note: By default minikube attempts to use Docker as the driver, but you can specify VirtualBox as your preferred driver, which has some advantages.
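If you want VirtualBox picked up on every start, one option (assuming a reasonably recent minikube that supports the config subcommand) is to set it as the default driver:
# Make VirtualBox the default driver for future minikube start runs
minikube config set driver virtualbox
minikube start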
Another way would have been to reload the Ubuntu bash terminal:
bash --login
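If you would rather not open a new terminal at all, clearing bash's cached command locations often has the same effect, assuming the stale /usr/local/bin path is only bash's command hash:
# Forget cached command paths in the current bash session, then retry
hash -r
which minikube
minikube start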
Note:
If all the above techniques do not work, you can add the Minikube executable to your path:
sudo mv minikube /usr/local/bin
You can then verify the Minikube executable path:
which minikube
That's all.
I hope this helps

Minikube won't start on mac

Trying to start minikube on mac. Virtualization is being provided by VirtualBox.
$ minikube start
😄 minikube v1.1.0 on darwin (amd64)
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
🐳 Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
❌ Unable to load cached images: loading cached images: loading image /Users/paul/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.14.2: Docker load /tmp/kube-proxy_v1.14.2: command failed: docker load -i /tmp/kube-proxy_v1.14.2
stdout:
stderr: open /var/lib/docker/image/overlay2/layerdb/tmp/write-set-542676317/diff: read-only file system
: Process exited with status 1
💣 Failed to setup certs: pre-copy: command failed: sudo rm -f /var/lib/minikube/certs/ca.crt
stdout:
stderr: rm: cannot remove '/var/lib/minikube/certs/ca.crt': Input/output error
: Process exited with status 1
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
Trying minikube delete followed by minikube start produces the same issue.
Docker is running and is signed in.
I also deleted all machines in virtualbox after minikube delete and still get the same result.
According to "What if I answer a question in a comment?" I am adding an answer as well, since many people don't read comments.
You can try deleting the local config in MINIKUBE_HOME before starting minikube:
rm -rf ~/.minikube
Try
minikube delete
and
minikube start
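Putting it together, a suggested order (a sketch; deleting the cluster first lets minikube clean up the VM while its config still exists):
# Remove the cluster, wipe the local minikube state, then start fresh
minikube delete
rm -rf ~/.minikube
minikube start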

Is it possible to mount multiple volumes when starting minikube?

I tried this but it didn't work:
minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user/app1:/minikube-host/app1 \
--mount-string /home/user/app2:/minikube-host/app2
but only /home/user/app2 was mounted.
You can run multiple mount commands after starting your minikube to mount the different folders:
minikube mount /home/user/app1:/minikube-host/app1
minikube mount /home/user/app2:/minikube-host/app2
This will mount multiple folders in minikube.
There is no need to mount multiple volumes at start time in your case.
Also note that minikube mount after start needs a terminal that stays open (the daemon process must keep running).
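If you don't want to keep two separate terminals open, a sketch (reusing the paths from the question) is to background each mount daemon from one shell and leave that shell running:
# Each mount runs as a background daemon; this shell must stay alive
minikube mount /home/user/app1:/minikube-host/app1 &
minikube mount /home/user/app2:/minikube-host/app2 &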
You can mount /home/user -> /minikube-host. All the folders inside /home/user will then be available inside the VM at /minikube-host.
/home/user/app1 will be available inside the VM as /minikube-host/app1
/home/user/app2 will be available inside the VM as /minikube-host/app2
minikube start --vm-driver=hyperkit --memory 8192 --mount \
--mount-string /home/user:/minikube-host
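You can then verify the mount from inside the VM, mirroring the check in the first answer (expected output sketched, assuming app1 and app2 exist on the host):
$ minikube ssh
$ ls /minikube-host
app1  app2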
Hope this helps!
Currently there is no way. Even using "minikube mount" you need to run each command in a separate terminal, which is completely unusable.

Got permission denied while trying to connect to the Docker daemon socket while executing docker stop

I have 3 containers running on Docker, and I need to stop all of them using the following:
sudo docker stop $(docker ps -q)
When I run the command I get this message:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.32/containers/json: dial unix /var/run/docker.sock: connect: permission denied
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
I did some searching, and the cases where that message shows up do not apply to my environment. I'm using Ubuntu 16.04 LTS with Docker version 17.09.0-ce, build afdb6d4.
What does this message mean?
sudo usermod -a -G docker $USER
Reboot then run:
docker container run hello-world
It worked for me on Ubuntu 18.2.
If you are getting "permission denied", that probably means you haven't added yourself to the group of users that can operate Docker. To fix that, go to your terminal and type:
sudo usermod -aG docker <name-of-user-to-grant-permission>
The 'docker' parameter is the group created when Docker is installed, and you can check that by typing:
getent group | grep docker
The second parameter is the user you are adding to the group. You can check the list of users by typing:
getent passwd
You can find more information about the usermod command here.
UPDATED:
I installed Docker again and just remembered that after you apply this command you need to restart your machine.
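If a reboot (or logging out and back in) is inconvenient, a quicker sketch is to start a shell with the new group already active:
# Pick up the docker group in the current session without rebooting
newgrp docker
docker ps   # should now work without sudo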
It seems your user cannot use the docker command, so you need to run the part in parentheses via sudo as well:
sudo docker stop $(sudo docker ps -q)

Mounting NFS volume in Google Container Engine with Container OS (COS)

After migrating the image type from container-vm to cos for the nodes of a GKE cluster, it seems to be no longer possible to mount an NFS volume for a pod.
The problem seems to be missing NFS client libraries, as a mount command from the command line fails on all COS versions I tried (cos-stable-58-9334-62-0, cos-beta-59-9460-20-0, cos-dev-60-9540-0-0).
sudo mount -t nfs mynfsserver:/myshare /mnt
fails with
mount: wrong fs type, bad option, bad superblock on mynfsserver:/myshare,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
But this contradicts the supported volume types listed here:
https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support
Mounting a NFS volume in a pod works in a pool with image-type container-vm but not with cos.
With cos I get the following messages from kubectl describe pod:
MountVolume.SetUp failed for volume "kubernetes.io/nfs/b6e6cf44-41e7-11e7-8b00-42010a840079-nfs-mandant1" (spec.Name: "nfs-mandant1") pod "b6e6cf44-41e7-11e7-8b00-42010a840079" (UID: "b6e6cf44-41e7-11e7-8b00-42010a840079") with: mount failed: exit status 1
Mounting command: /home/kubernetes/containerized_mounter/mounter
Mounting arguments: singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1 nfs []
Output: Mount failed: Mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1]
Output: mount.nfs: Failed to resolve server singlefs-1-vm: Temporary failure in name resolution
Martin, are you setting up the mounts manually (executing mount yourself), or are you letting kubernetes do it on your behalf via a pod referencing an NFS volume?
The former will not work. The latter will. As you've discovered, COS does not ship with NFS client libraries, so GKE gets around this by setting up a chroot (at /home/kubernetes/containerized_mounter/rootfs) with the required binaries and calling mount inside that.
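For reference, a minimal sketch of the pod-based approach, reusing the server and export from the question (mynfsserver:/myshare); the pod name, image, and container mount path are hypothetical:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs-share
      mountPath: /data          # where the share appears inside the container
  volumes:
  - name: nfs-share
    nfs:
      server: mynfsserver       # NFS server from the question
      path: /myshare            # exported path from the question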
I've nicked the solution #saad-ali mentioned above, from the kubernetes project, to make this work.
To be concrete, I've added the following to my cloud-config:
# This script creates a chroot environment containing the tools needed to mount an nfs drive
- path: /tmp/mount_config.sh
  permissions: 0755
  owner: root
  content: |
    #!/bin/sh
    set +x # For debugging
    export USER=root
    export HOME=/home/dockerrunner
    mkdir -p /tmp/mount_chroot
    chmod a+x /tmp/mount_chroot
    cd /tmp/
    echo "Sleeping for 30 seconds because toolbox pull fails otherwise"
    sleep 30
    toolbox --bind /tmp /google-cloud-sdk/bin/gsutil cp gs://<uploaded-file-bucket>/mounter.tar /tmp/mounter.tar
    tar xf /tmp/mounter.tar -C /tmp/mount_chroot/
    mount --bind /tmp/mount_chroot /tmp/mount_chroot
    mount -o remount,exec /tmp/mount_chroot
    mount --rbind /proc /tmp/mount_chroot/proc
    mount --rbind /dev /tmp/mount_chroot/dev
    mount --rbind /tmp /tmp/mount_chroot/tmp
    mount --rbind /mnt /tmp/mount_chroot/mnt
The uploaded-file-bucket contains the chroot image the kube team has created, downloaded from: https://storage.googleapis.com/kubernetes-release/gci-mounter/mounter.tar
Then, the runcmd for the cloud-config looks something like:
runcmd:
- /tmp/mount_config.sh
- mkdir -p /mnt/disks/nfs_mount
- chroot /tmp/mount_chroot /bin/mount -t nfs -o rw nfsserver:/sftp /mnt/disks/nfs_mount
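Once cloud-init has run, a quick check on the node (paths taken from the runcmd above):
# The NFS export should now be visible on the COS node
df -h /mnt/disks/nfs_mount
mount | grep nfs_mount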
This works. Ugly as hell, but it'll have to do for now.