I'm trying to mount a folder of my WD My Cloud NAS in Raspbian, but I always get Permission denied.
What I've done:
Adding the folder on My Cloud to /etc/exports:
/nfs/raspberry 192.168.178.26(rw,subtree_check)
then
service nfs-kernel-server restart
exportfs -a
On the client, showmount -e 192.168.178.23 shows:
/nfs *
/nfs/raspberry 192.168.178.26
Then trying to mount and test:
sudo mount -t nfs -o v3,rw,soft,nolock,wsize=8192,rsize=16384 192.168.178.23:/nfs/raspberry nas/
pi@raspberrypi ~ $ ls nas
ls: cannot open directory nas: Permission denied
It seems that I don't have permission to access /nfs/raspberry.
Did I miss something?
The problem was that I had created the folder I wanted to mount with the WD GUI.
When I create the folder via ssh, set its owner/group to root/share, and then add it to /etc/exports, it works fine.
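As a minimal sketch of that fix, assuming the NAS runs a standard Linux userland and has the share group WD uses (the paths and addresses below come from the question):

```shell
# Sketch only: the privileged commands are shown as comments, since
# they must run as root on the NAS over ssh.
EXPORT_DIR=/nfs/raspberry
CLIENT=192.168.178.26

# On the NAS, as root:
#   mkdir -p "$EXPORT_DIR"
#   chown root:share "$EXPORT_DIR"
#   chmod 775 "$EXPORT_DIR"

# The matching /etc/exports entry, then re-export with exportfs -a:
ENTRY="$EXPORT_DIR $CLIENT(rw,subtree_check)"
echo "$ENTRY"
```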
I want to run the exportfs -a command, but instead of running it directly on the host, I want to run it in a privileged container on the host. That means I need to mount some files/directories into the container so that exportfs -a takes effect on the host.
I mount the following:
/etc/exports and /etc/exports.d
all directories listed in /etc/exports and /etc/exports.d
/var/lib/nfs/etab
But when I modify /etc/exports and run exportfs -a, nothing changes in /var/lib/nfs/etab
Any ideas? Thanks.
Mounting /var/lib/nfs instead of just /var/lib/nfs/etab solves this problem.
It is a permission issue: the whole /var/lib/nfs directory must be writable.
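A hedged sketch of what the container invocation could then look like; the image name nfs-tools is a placeholder, not a real published image:

```shell
# Sketch only: assemble and print the invocation so the bind mounts
# are visible; actually running it needs a host with docker and nfsd.
DOCKER_CMD="docker run --rm --privileged \
  -v /etc/exports:/etc/exports \
  -v /etc/exports.d:/etc/exports.d \
  -v /var/lib/nfs:/var/lib/nfs \
  nfs-tools exportfs -a"
echo "$DOCKER_CMD"
```

Note that the whole /var/lib/nfs directory is bind-mounted, per the fix above, rather than the single etab file.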
I want to be able to fire commands at my instance with gcloud because it handles auth for me. This works well but how do I run them with sudo/root access?
For example I can copy files to my accounts folder:
gcloud compute scp --recurse myinst:/home/me/zzz /test --zone us-east1-b
But I can't copy to /tmp:
gcloud compute scp --recurse /test myinst:/tmp --zone us-east1-b
pscp: unable to open directory /tmp/.pki: permission denied
19.32.38.265147.log | 0 kB | 0.4 kB/s | ETA: 00:00:00 | 100%
pscp: unable to open /tmp/ks-script-uqygub: permission denied
What is the right way to run "gcloud compute scp" with sudo? Just to be clear, I can of course ssh into the instance and run sudo interactively.
Edit: for now I'm just editing the permissions on the remote host.
Just so I'm understanding correctly, are you trying to copy FROM the remote /tmp folder, or TO it? This question sounds like you're trying to copy to it, but the code says you're trying to copy from it.
This has worked for me in the past for copying from my local drive to a remote drive, though I have some concern over running sudo remotely:
gcloud compute scp myfile.txt [gce_user]@myinst:~/myfile.txt --project=[project_name];
gcloud compute ssh [gce_user]@myinst --command 'sudo cp ~/myfile.txt /tmp/' --project=[project_name];
You would reverse the process (and obviously rewrite the direction and sequence of the commands) if you needed to remotely access the contents of /tmp and then copy them down to your local drive.
Hope this helps!
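For the direction the question's error output suggests (pulling files out of the remote /tmp), a sketch with the same placeholders; the staging path ~/staged and the /tmp/mydata name are assumptions for illustration:

```shell
# Sketch only: the commands are assembled and printed, not executed,
# since they need a live GCE instance and gcloud credentials.
STAGE_CMD="gcloud compute ssh [gce_user]@myinst \
  --command 'sudo cp -r /tmp/mydata ~/staged && sudo chown -R \$USER ~/staged' \
  --project=[project_name]"
PULL_CMD="gcloud compute scp --recurse [gce_user]@myinst:~/staged ./mydata \
  --project=[project_name]"
printf '%s\n%s\n' "$STAGE_CMD" "$PULL_CMD"
```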
Kinda what it says on the tin. I try doing minikube mount /some/dir:/home/docker/other_dir &, and it fails with the following error:
Mounting /some/dir into /home/docker/other_dir on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
ssh command error:
command :
sudo mkdir -p /home/docker/other_dir || true;
sudo mount -t 9p -o trans=tcp,port=38902,dfltuid=1001,dfltgid=1001,version=9p2000.u,msize=262144 192.168.99.1 /home/docker/other_dir;
sudo chmod 775 /home/docker/other_dir;
err : exit status 1
output : chmod: changing permissions of '/home/docker/other_dir': Input/output error
Then, when I do a minikube ssh and ls -l inside /home/docker, I get this:
$ ls -l
ls: cannot access 'other_dir': Input/output error
total 0
d????????? ? ? ? ? ? other_dir
UPDATE:
After some experimenting, it looks like the problem arises when /some/dir has a user other than the current user. Why this is the case is unclear.
Which version of minikube are you running? It's working for me on minikube version v0.20.0.
minikube mount /tmp/moun/:/home/docker/pk
Mounting /tmp/moun/ into /home/docker/pk on the minikube VM
This daemon process needs to stay alive for the mount to still be accessible...
ufs starting
It's working well, and I can create a file too:
$ touch /tmp/moun/cool
We can check the file with:
$ minikube ssh
$ ls /home/docker/pk
cool
https://github.com/kubernetes/minikube/issues/1822
You'll need to run the minikube mount command as that user if you want to mount a folder owned by that user.
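A sketch of that workaround; the user name alice is hypothetical, the paths come from the question:

```shell
# Sketch only: printed rather than run, since it needs a minikube VM.
# 'alice' stands in for whichever user owns the source directory.
OWNER=alice
SRC=/some/dir
DST=/home/docker/other_dir
MOUNT_CMD="sudo -u $OWNER minikube mount $SRC:$DST"
echo "$MOUNT_CMD"
```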
After migrating the image type from container-vm to cos for the nodes of a GKE cluster, it seems no longer possible to mount a NFS volume for a pod.
The problem seems to be missing NFS client libraries, as a mount command from command line fails on all COS versions I tried (cos-stable-58-9334-62-0, cos-beta-59-9460-20-0, cos-dev-60-9540-0-0).
sudo mount -t nfs mynfsserver:/myshare /mnt
fails with
mount: wrong fs type, bad option, bad superblock on mynfsserver:/myshare,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
But this contradicts the supported volume types listed here:
https://cloud.google.com/container-engine/docs/node-image-migration#storage_driver_support
Mounting a NFS volume in a pod works in a pool with image-type container-vm but not with cos.
With cos I get following messages with kubectl describe pod:
MountVolume.SetUp failed for volume "kubernetes.io/nfs/b6e6cf44-41e7-11e7-8b00-42010a840079-nfs-mandant1" (spec.Name: "nfs-mandant1") pod "b6e6cf44-41e7-11e7-8b00-42010a840079" (UID: "b6e6cf44-41e7-11e7-8b00-42010a840079") with: mount failed: exit status 1
Mounting command: /home/kubernetes/containerized_mounter/mounter
Mounting arguments: singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1 nfs []
Output: Mount failed: Mount failed: exit status 32
Mounting command: chroot
Mounting arguments: [/home/kubernetes/containerized_mounter/rootfs mount -t nfs singlefs-1-vm:/data/mandant1 /var/lib/kubelet/pods/b6e6cf44-41e7-11e7-8b00-42010a840079/volumes/kubernetes.io~nfs/nfs-mandant1]
Output: mount.nfs: Failed to resolve server singlefs-1-vm: Temporary failure in name resolution
Martin, are you setting up the mounts manually (executing mount yourself), or are you letting kubernetes do it on your behalf via a pod referencing an NFS volume?
The former will not work. The latter will. As you've discovered, COS does not ship with NFS client libraries, so GKE gets around this by setting up a chroot (at /home/kubernetes/containerized_mounter/rootfs) with the required binaries and calling mount inside that.
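A minimal pod spec of the kind that does work on COS, letting kubelet invoke the containerized mounter on the pod's behalf; the server and share names are taken from the question, the busybox image is an assumption:

```shell
# Sketch: emit a minimal pod manifest with an NFS volume; kubelet,
# not the container, performs the mount when this pod is created.
cat > /tmp/nfs-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /mnt/nfs
  volumes:
  - name: nfs-vol
    nfs:
      server: mynfsserver
      path: /myshare
EOF
echo "wrote /tmp/nfs-pod.yaml"
```

Creating this with kubectl apply -f /tmp/nfs-pod.yaml avoids the missing /sbin/mount.nfs helper entirely.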
I've nicked the solution @saad-ali mentioned above from the kubernetes project to make this work.
To be concrete, I've added the following to my cloud-config:
# This script creates a chroot environment containing the tools needed to mount an nfs drive
- path: /tmp/mount_config.sh
  permissions: 0755
  owner: root
  content: |
    #!/bin/sh
    set -x  # For debugging
    export USER=root
    export HOME=/home/dockerrunner
    mkdir -p /tmp/mount_chroot
    chmod a+x /tmp/mount_chroot
    cd /tmp/
    echo "Sleeping for 30 seconds because toolbox pull fails otherwise"
    sleep 30
    toolbox --bind /tmp /google-cloud-sdk/bin/gsutil cp gs://<uploaded-file-bucket>/mounter.tar /tmp/mounter.tar
    tar xf /tmp/mounter.tar -C /tmp/mount_chroot/
    mount --bind /tmp/mount_chroot /tmp/mount_chroot
    mount -o remount,exec /tmp/mount_chroot
    mount --rbind /proc /tmp/mount_chroot/proc
    mount --rbind /dev /tmp/mount_chroot/dev
    mount --rbind /tmp /tmp/mount_chroot/tmp
    mount --rbind /mnt /tmp/mount_chroot/mnt
The <uploaded-file-bucket> bucket contains the chroot image the Kubernetes team has created, downloaded from: https://storage.googleapis.com/kubernetes-release/gci-mounter/mounter.tar
Then, the runcmd for the cloud-config looks something like:
runcmd:
- /tmp/mount_config.sh
- mkdir -p /mnt/disks/nfs_mount
- chroot /tmp/mount_chroot /bin/mount -t nfs -o rw nfsserver:/sftp /mnt/disks/nfs_mount
This works. Ugly as hell, but it'll have to do for now.
I am trying to mount a Solaris SPARC 5.10 directory on Solaris SPARC 5.11, like:
root> mount S5.10Machine:/mydir /mydir
and I am getting:
nfs mount: S5.10Machine:/mydir: permission denied
I have given 777 permissions on S5.10Machine:/mydir.
If you are trying to mount an NFS share, be sure that you have permissions set in /etc/dfs/dfstab.
man share
There is an rw option, which sets access rights.
And if you want to mount the NFS share, the right command is:
mount -F nfs S5.10Machine:/mydir /mydir
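On the 5.10 server side, the matching dfstab entry would look like this sketch (the -d description string is an assumption):

```shell
# Sketch: the line to add to /etc/dfs/dfstab on the 5.10 machine,
# then run 'shareall' (or 'share' directly) to publish it.
SHARE_LINE='share -F nfs -o rw -d "mydir export" /mydir'
echo "$SHARE_LINE"
```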